Nov 24 21:19:40 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 24 21:19:40 crc restorecon[4683]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 24 21:19:40 crc restorecon[4683]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 24 21:19:40 crc restorecon[4683]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc 
restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc 
restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 
21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:19:40 crc restorecon[4683]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc 
restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 24 21:19:40 crc restorecon[4683]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:19:40 crc restorecon[4683]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:19:40 crc restorecon[4683]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:19:40 crc 
restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:19:40 crc restorecon[4683]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:40
crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 21:19:40 crc restorecon[4683]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:40 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 
21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:19:41 crc 
restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc 
restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc 
restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 24 21:19:41 crc restorecon[4683]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc 
restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc 
restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:19:41 crc restorecon[4683]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 24 21:19:41 crc restorecon[4683]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 24 21:19:42 crc kubenswrapper[4915]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 21:19:42 crc kubenswrapper[4915]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 24 21:19:42 crc kubenswrapper[4915]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 21:19:42 crc kubenswrapper[4915]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 24 21:19:42 crc kubenswrapper[4915]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 24 21:19:42 crc kubenswrapper[4915]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.138566 4915 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143571 4915 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143607 4915 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143616 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143625 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143635 4915 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143643 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143651 4915 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143659 4915 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143668 4915 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143675 4915 feature_gate.go:330] unrecognized 
feature gate: AzureWorkloadIdentity Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143698 4915 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143706 4915 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143713 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143721 4915 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143728 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143737 4915 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143747 4915 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143757 4915 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143767 4915 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143805 4915 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143816 4915 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143825 4915 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143833 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143841 4915 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143849 4915 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143857 4915 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143865 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143873 4915 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143881 4915 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143889 4915 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143897 4915 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143932 4915 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143941 4915 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143949 4915 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143957 4915 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143965 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143972 4915 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143980 4915 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143987 4915 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.143995 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144005 4915 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144015 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144027 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144035 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144042 4915 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144050 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144057 4915 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144065 4915 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144072 4915 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144080 4915 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144087 4915 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144095 4915 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144103 4915 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144112 4915 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144120 4915 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144128 4915 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144136 4915 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144144 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144153 4915 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144162 4915 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144170 4915 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144181 4915 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144192 4915 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144200 4915 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144209 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144219 4915 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144227 4915 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144235 4915 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144244 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144251 4915 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.144259 4915 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144405 4915 flags.go:64] FLAG: --address="0.0.0.0"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144421 4915 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144436 4915 flags.go:64] FLAG: --anonymous-auth="true"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144447 4915 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144458 4915 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144467 4915 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144479 4915 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144491 4915 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144501 4915 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144510 4915 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144520 4915 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144530 4915 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144539 4915 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144548 4915 flags.go:64] FLAG: --cgroup-root=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144557 4915 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144566 4915 flags.go:64] FLAG: --client-ca-file=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144575 4915 flags.go:64] FLAG: --cloud-config=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144584 4915 flags.go:64] FLAG: --cloud-provider=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144593 4915 flags.go:64] FLAG: --cluster-dns="[]"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144603 4915 flags.go:64] FLAG: --cluster-domain=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144611 4915 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144620 4915 flags.go:64] FLAG: --config-dir=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144629 4915 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144639 4915 flags.go:64] FLAG: --container-log-max-files="5"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144649 4915 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144658 4915 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144667 4915 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144676 4915 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144686 4915 flags.go:64] FLAG: --contention-profiling="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144695 4915 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144704 4915 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144713 4915 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144722 4915 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144733 4915 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144742 4915 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144751 4915 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144760 4915 flags.go:64] FLAG: --enable-load-reader="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144768 4915 flags.go:64] FLAG: --enable-server="true"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144805 4915 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144818 4915 flags.go:64] FLAG: --event-burst="100"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144827 4915 flags.go:64] FLAG: --event-qps="50"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144836 4915 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144845 4915 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144856 4915 flags.go:64] FLAG: --eviction-hard=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144866 4915 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144875 4915 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144885 4915 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144894 4915 flags.go:64] FLAG: --eviction-soft=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144903 4915 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144912 4915 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144921 4915 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144930 4915 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144939 4915 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144948 4915 flags.go:64] FLAG: --fail-swap-on="true"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144957 4915 flags.go:64] FLAG: --feature-gates=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144968 4915 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144977 4915 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144986 4915 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.144995 4915 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145004 4915 flags.go:64] FLAG: --healthz-port="10248"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145013 4915 flags.go:64] FLAG: --help="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145022 4915 flags.go:64] FLAG: --hostname-override=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145031 4915 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145040 4915 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145050 4915 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145058 4915 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145067 4915 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145075 4915 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145084 4915 flags.go:64] FLAG: --image-service-endpoint=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145093 4915 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145102 4915 flags.go:64] FLAG: --kube-api-burst="100"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145111 4915 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145120 4915 flags.go:64] FLAG: --kube-api-qps="50"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145129 4915 flags.go:64] FLAG: --kube-reserved=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145138 4915 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145147 4915 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145156 4915 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145165 4915 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145174 4915 flags.go:64] FLAG: --lock-file=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145184 4915 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145193 4915 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145202 4915 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145215 4915 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145225 4915 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145234 4915 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145245 4915 flags.go:64] FLAG: --logging-format="text"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145254 4915 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145263 4915 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145272 4915 flags.go:64] FLAG: --manifest-url=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145280 4915 flags.go:64] FLAG: --manifest-url-header=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145292 4915 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145301 4915 flags.go:64] FLAG: --max-open-files="1000000"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145311 4915 flags.go:64] FLAG: --max-pods="110"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145321 4915 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145331 4915 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145340 4915 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145349 4915 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145358 4915 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145367 4915 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145376 4915 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145394 4915 flags.go:64] FLAG: --node-status-max-images="50"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145403 4915 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145412 4915 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145421 4915 flags.go:64] FLAG: --pod-cidr=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145430 4915 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145443 4915 flags.go:64] FLAG: --pod-manifest-path=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145452 4915 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145461 4915 flags.go:64] FLAG: --pods-per-core="0"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145470 4915 flags.go:64] FLAG: --port="10250"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145479 4915 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145488 4915 flags.go:64] FLAG: --provider-id=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145498 4915 flags.go:64] FLAG: --qos-reserved=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145507 4915 flags.go:64] FLAG: --read-only-port="10255"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145516 4915 flags.go:64] FLAG: --register-node="true"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145525 4915 flags.go:64] FLAG: --register-schedulable="true"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145535 4915 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145549 4915 flags.go:64] FLAG: --registry-burst="10"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145558 4915 flags.go:64] FLAG: --registry-qps="5"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145567 4915 flags.go:64] FLAG: --reserved-cpus=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145575 4915 flags.go:64] FLAG: --reserved-memory=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145586 4915 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145595 4915 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145604 4915 flags.go:64] FLAG: --rotate-certificates="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145613 4915 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145622 4915 flags.go:64] FLAG: --runonce="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145630 4915 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145640 4915 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145649 4915 flags.go:64] FLAG: --seccomp-default="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145658 4915 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145667 4915 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145676 4915 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145685 4915 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145694 4915 flags.go:64] FLAG: --storage-driver-password="root"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145703 4915 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145711 4915 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145720 4915 flags.go:64] FLAG: --storage-driver-user="root"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145729 4915 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145738 4915 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145747 4915 flags.go:64] FLAG: --system-cgroups=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145756 4915 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145769 4915 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145802 4915 flags.go:64] FLAG: --tls-cert-file=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145812 4915 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145823 4915 flags.go:64] FLAG: --tls-min-version=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145832 4915 flags.go:64] FLAG: --tls-private-key-file=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145840 4915 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145849 4915 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145859 4915 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145867 4915 flags.go:64] FLAG: --v="2"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145879 4915 flags.go:64] FLAG: --version="false"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145889 4915 flags.go:64] FLAG: --vmodule=""
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145901 4915 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.145910 4915 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146108 4915 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146119 4915 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146128 4915 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146137 4915 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146146 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146154 4915 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146162 4915 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146170 4915 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146179 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146186 4915 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146194 4915 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146202 4915 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146210 4915 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146217 4915 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146225 4915 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146233 4915 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146240 4915 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146248 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146256 4915 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146263 4915 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146271 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146280 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146287 4915 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146295 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146303 4915 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146311 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146319 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146326 4915 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146334 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146342 4915 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146352 4915 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146363 4915 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146372 4915 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146381 4915 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146390 4915 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146398 4915 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146406 4915 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146414 4915 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146422 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146430 4915 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146437 4915 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146445 4915 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146453 4915 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146460 4915 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146469 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146476 4915 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146484 4915 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146491 4915 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146499 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146506 4915 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146514 4915 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146522 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146530 4915 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146538 4915 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146546 4915 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146556 4915 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146565 4915 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146574 4915 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146582 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146589 4915 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146599 4915 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146608 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146616 4915 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146623 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146631 4915 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146639 4915 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146647 4915 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146654 4915 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146662 4915 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146675 4915 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.146692 4915 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.147822 4915 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.159711 4915 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.159754 4915 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.159932 4915 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.159948 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.159957 4915 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.159968 4915 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.159977 4915 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.159985 4915 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.159993 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160002 4915 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160010 4915 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160019 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160029 4915 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160040 4915 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160049 4915 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160058 4915 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160067 4915 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160075 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160084 4915 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160093 4915 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160101 4915 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160109 4915 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160117 4915 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160125 4915 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160132 4915 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160140 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160148 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160157 4915 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160165 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160173 4915 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160181 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160188 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160196 4915 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160205 4915 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160212 4915 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160220 4915 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160227 4915 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160235 4915 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160243 4915 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160250 4915 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160258 4915 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160265 4915 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160274 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160286 4915 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160296 4915 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160305 4915 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160314 4915 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160322 4915 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160330 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160337 4915 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160345 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160353 4915 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160361 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160368 4915 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160378 4915 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160387 4915 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160394 4915 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160404 4915 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160412 4915 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160420 4915 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160427 4915 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160435 4915 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160443 4915 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160883 4915 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160909 4915 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160920 4915 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160930 4915 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160940 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160950 4915 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160960 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160969 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160979 4915 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.160993 4915 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.161012 4915 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161588 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161606 4915 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161618 4915 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161630 4915 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161641 4915 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161651 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161661 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161671 4915 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161709 4915 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161723 4915 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161733 4915 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161756 4915 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161767 4915 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161807 4915 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161828 4915 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161838 4915 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161848 4915 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161867 4915 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161877 4915 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161888 4915 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161898 4915 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161927 4915 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161937 4915 feature_gate.go:330] unrecognized feature gate: Example
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161947 4915 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161957 4915 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161971 4915 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161982 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.161992 4915 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162001 4915 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162011 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162021 4915 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162031 4915 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162040 4915 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162112 4915 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162187 4915 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162204 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162216 4915 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162230 4915 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162242 4915 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162253 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162264 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162294 4915 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162303 4915 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162311 4915 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162319 4915 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162327 4915 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162362 4915 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162370 4915 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162380 4915 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162390 4915 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162401 4915 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162622 4915 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162634 4915 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162644 4915 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162655 4915 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162664 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162672 4915 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162681 4915 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162689 4915 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162701 4915 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162712 4915 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162721 4915 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162730 4915 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162738 4915 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162746 4915 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162753 4915 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162761 4915 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162769 4915 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162807 4915 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162815 4915 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.162823 4915 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.162835 4915 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.165113 4915 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.172755 4915 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.172922 4915 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.175880 4915 server.go:997] "Starting client certificate rotation"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.175966 4915 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.177333 4915 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-09 21:31:15.063239704 +0000 UTC
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.177466 4915 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 360h11m32.885778917s for next certificate rotation
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.207357 4915 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.212549 4915 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.231458 4915 log.go:25] "Validated CRI v1 runtime API"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.277609 4915 log.go:25] "Validated CRI v1 image API"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.280090 4915 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.287359 4915 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-24-21-15-22-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.287406 4915 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.323461 4915 manager.go:217] Machine: {Timestamp:2025-11-24 21:19:42.317588441 +0000 UTC m=+0.633840694 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:0b6646c2-0b1e-4f58-9c61-a13867a5dfdb BootID:f82f57df-1c60-47eb-b103-4027a05787c0 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:0c:f9:d5 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:0c:f9:d5 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:82:67:d6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:1f:79:7e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:cc:02:a3 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:40:e4:26 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0a:f2:0a:29:7b:e2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:86:a2:6b:3b:08:8f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.324049 4915 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.324320 4915 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.324950 4915 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.325276 4915 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.325334 4915 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.325670 4915 topology_manager.go:138] "Creating topology manager with none policy"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.325689 4915 container_manager_linux.go:303] "Creating device plugin manager"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.326313 4915 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.326364 4915 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.327652 4915 state_mem.go:36] "Initialized new in-memory state store"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.327833 4915 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.331724 4915 kubelet.go:418] "Attempting to sync node with API server"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.331759 4915 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.331808 4915 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.331829 4915 kubelet.go:324] "Adding apiserver pod source"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.331877 4915 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.336385 4915 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.337313 4915 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.340039 4915 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.340864 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.340859 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused
Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.340989 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError"
Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.341015 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.341904 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.341956 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.341975 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.341992 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.342019 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.342037 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.342051 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.342073 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.342088 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.342104 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.342122 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.342136 4915 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.343010 4915 plugins.go:603] 
"Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.343713 4915 server.go:1280] "Started kubelet" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.345086 4915 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.345609 4915 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.345607 4915 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.347104 4915 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 21:19:42 crc systemd[1]: Started Kubernetes Kubelet. 
Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.350377 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.350426 4915 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.350477 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:19:08.839987838 +0000 UTC Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.350535 4915 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 952h59m26.489454534s for next certificate rotation Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.351198 4915 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.351245 4915 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.351240 4915 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.351301 4915 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.352377 4915 server.go:460] "Adding debug handlers to kubelet server" Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.352515 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.359960 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": 
dial tcp 38.102.83.107:6443: connect: connection refused" interval="200ms" Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.362936 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.363721 4915 factory.go:153] Registering CRI-O factory Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.363755 4915 factory.go:221] Registration of the crio container factory successfully Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.363964 4915 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.363975 4915 factory.go:55] Registering systemd factory Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.363984 4915 factory.go:221] Registration of the systemd container factory successfully Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.364010 4915 factory.go:103] Registering Raw factory Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.364029 4915 manager.go:1196] Started watching for new ooms in manager Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.361538 4915 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b0e19a491ca30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 21:19:42.34367032 +0000 UTC m=+0.659922523,LastTimestamp:2025-11-24 21:19:42.34367032 +0000 UTC m=+0.659922523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.370978 4915 manager.go:319] Starting recovery of all containers Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.378260 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.378505 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.378714 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.378953 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.379132 4915 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382023 4915 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382105 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382137 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382155 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382177 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382252 4915 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382274 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382289 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382302 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382322 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382336 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382350 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382365 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382380 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382395 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382410 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382424 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382488 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382502 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382521 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382587 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382601 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382616 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382632 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" 
seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382652 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382668 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382695 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382729 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382743 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382757 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 
21:19:42.382769 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382803 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382816 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382830 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382844 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382858 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382872 4915 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382885 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382900 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382918 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382932 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382946 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382961 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382975 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.382988 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383002 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383016 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383029 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383049 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383066 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383086 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383101 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383117 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383151 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383165 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383177 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383191 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383205 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383217 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383232 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383245 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" 
seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383259 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383273 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383287 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383302 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383315 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383329 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: 
I1124 21:19:42.383343 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383355 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383369 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383384 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383397 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383410 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383425 4915 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383439 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383452 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383473 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383488 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383501 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383514 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383530 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383542 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383556 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383568 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383581 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383594 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383608 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383622 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383635 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383652 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383667 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383681 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383694 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383707 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383721 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383735 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383750 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383762 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383793 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383807 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383826 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383864 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383881 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383896 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383912 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383930 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383948 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383970 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.383989 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384008 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384026 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384040 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384055 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384069 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384092 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384105 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384117 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384131 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384145 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384157 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384171 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384185 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" 
seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384198 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384213 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384314 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384329 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384343 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384356 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: 
I1124 21:19:42.384371 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384385 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384399 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384414 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384429 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384443 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384456 4915 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384469 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384482 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384498 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384513 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384527 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384543 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384558 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384573 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384586 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384600 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384614 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384627 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384640 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384653 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384667 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384680 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384695 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384708 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" 
seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384724 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384737 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384750 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384763 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384796 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384810 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384832 4915 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384846 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384860 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384874 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384887 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384902 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384916 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384931 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384950 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384968 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.384986 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385004 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385025 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385045 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385064 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385082 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385101 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385115 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385129 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" 
seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385144 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385157 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385171 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385185 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385198 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385211 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 
21:19:42.385225 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385239 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385252 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385266 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385280 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385293 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385305 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385319 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385333 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385346 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385360 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385373 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385388 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385401 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385416 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385430 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385443 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385457 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385471 4915 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385483 4915 reconstruct.go:97] "Volume reconstruction finished" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.385493 4915 reconciler.go:26] "Reconciler: start to sync state" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.408147 4915 manager.go:324] Recovery completed Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.420891 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.422438 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.422486 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.422502 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.422829 4915 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.424318 4915 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.424342 4915 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.424363 4915 state_mem.go:36] "Initialized new in-memory state store" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.425350 4915 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.425391 4915 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.425416 4915 kubelet.go:2335] "Starting kubelet main sync loop" Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.426015 4915 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 21:19:42 crc kubenswrapper[4915]: W1124 21:19:42.426053 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.426127 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.444543 4915 policy_none.go:49] "None policy: Start" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.446264 4915 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.446287 4915 state_mem.go:35] "Initializing new in-memory state store" Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.451649 4915 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.526169 4915 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 
21:19:42.528059 4915 manager.go:334] "Starting Device Plugin manager" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.528386 4915 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.528409 4915 server.go:79] "Starting device plugin registration server" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.528897 4915 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.528919 4915 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.529158 4915 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.529255 4915 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.529264 4915 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.543310 4915 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.560741 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="400ms" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.629255 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.630729 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 
24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.630834 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.630854 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.630897 4915 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.631544 4915 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.727043 4915 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.727226 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.729133 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.729194 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.729218 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.729456 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:42 crc 
kubenswrapper[4915]: I1124 21:19:42.729670 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.729731 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.730916 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.730963 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.730981 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.731111 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.731221 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.731264 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.732149 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.732265 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.732320 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.732392 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.732407 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.732421 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.732437 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.732459 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.732478 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.732878 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.732946 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.732891 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.735015 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.735058 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.735074 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.735218 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.735399 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.735455 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.736208 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.736251 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.736267 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.736415 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.736455 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.736474 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.736836 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.736913 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.737551 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.737635 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.737724 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.738242 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.738299 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.738319 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.792262 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.792359 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc 
kubenswrapper[4915]: I1124 21:19:42.792422 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.792514 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.792563 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.792652 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.792721 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.792773 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.792869 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.792918 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.792968 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.793016 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.793122 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.793170 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.793222 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.831731 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.833188 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.833229 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.833240 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.833272 4915 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.834011 4915 kubelet_node_status.go:99] "Unable to register 
node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.894542 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.894603 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.894659 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.894689 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.894722 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.894754 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.894868 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.894909 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.894974 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.894933 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.894911 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895023 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895010 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895043 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895086 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.894885 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895133 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895102 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895554 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895605 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895632 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895666 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895720 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895748 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895772 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895820 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895845 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895870 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.895927 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: I1124 21:19:42.896115 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 24 21:19:42 crc kubenswrapper[4915]: E1124 21:19:42.962102 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="800ms" Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.086537 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.118412 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.148517 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:19:43 crc kubenswrapper[4915]: W1124 21:19:43.158220 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-19e2bc4ae1ca27852ff3e61a90d877397845f31d8aa49776f1f5ae71eab97399 WatchSource:0}: Error finding container 19e2bc4ae1ca27852ff3e61a90d877397845f31d8aa49776f1f5ae71eab97399: Status 404 returned error can't find the container with id 19e2bc4ae1ca27852ff3e61a90d877397845f31d8aa49776f1f5ae71eab97399 Nov 24 21:19:43 crc kubenswrapper[4915]: W1124 21:19:43.174863 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-2da6b6a074afb14f3e75c68dc4cbd3c5381a47009568b5b8a1dd30b131802775 WatchSource:0}: Error finding container 2da6b6a074afb14f3e75c68dc4cbd3c5381a47009568b5b8a1dd30b131802775: Status 404 returned error can't find the container with id 2da6b6a074afb14f3e75c68dc4cbd3c5381a47009568b5b8a1dd30b131802775 Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.180620 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.187623 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 24 21:19:43 crc kubenswrapper[4915]: W1124 21:19:43.203441 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-02ab28134a0ada7a0f3af93f74085eceae95732144aec94efb8ee81871f69ade WatchSource:0}: Error finding container 02ab28134a0ada7a0f3af93f74085eceae95732144aec94efb8ee81871f69ade: Status 404 returned error can't find the container with id 02ab28134a0ada7a0f3af93f74085eceae95732144aec94efb8ee81871f69ade Nov 24 21:19:43 crc kubenswrapper[4915]: W1124 21:19:43.212980 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-23292ebd6ec0b521aa1ae18b36342a2332eebc958dcadbfbeb8d8fa396e33579 WatchSource:0}: Error finding container 23292ebd6ec0b521aa1ae18b36342a2332eebc958dcadbfbeb8d8fa396e33579: Status 404 returned error can't find the container with id 23292ebd6ec0b521aa1ae18b36342a2332eebc958dcadbfbeb8d8fa396e33579 Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.234920 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.236602 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.236643 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.236655 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.236681 4915 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:19:43 crc kubenswrapper[4915]: E1124 
21:19:43.237273 4915 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 24 21:19:43 crc kubenswrapper[4915]: W1124 21:19:43.253488 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 24 21:19:43 crc kubenswrapper[4915]: E1124 21:19:43.253598 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.346739 4915 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.429809 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"02ab28134a0ada7a0f3af93f74085eceae95732144aec94efb8ee81871f69ade"} Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.430575 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2da6b6a074afb14f3e75c68dc4cbd3c5381a47009568b5b8a1dd30b131802775"} Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.431375 
4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"19e2bc4ae1ca27852ff3e61a90d877397845f31d8aa49776f1f5ae71eab97399"} Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.434765 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e635d59ebab32e54c7375d1b493da8ca1076ee1f98781b7b9469b5c3d9429b76"} Nov 24 21:19:43 crc kubenswrapper[4915]: I1124 21:19:43.437602 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"23292ebd6ec0b521aa1ae18b36342a2332eebc958dcadbfbeb8d8fa396e33579"} Nov 24 21:19:43 crc kubenswrapper[4915]: W1124 21:19:43.651410 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 24 21:19:43 crc kubenswrapper[4915]: E1124 21:19:43.651539 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:19:43 crc kubenswrapper[4915]: E1124 21:19:43.763681 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="1.6s" Nov 24 21:19:43 crc kubenswrapper[4915]: W1124 
21:19:43.894630 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 24 21:19:43 crc kubenswrapper[4915]: E1124 21:19:43.894742 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:19:43 crc kubenswrapper[4915]: W1124 21:19:43.904730 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 24 21:19:43 crc kubenswrapper[4915]: E1124 21:19:43.904838 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.038312 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.040316 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.040380 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.040397 4915 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.040439 4915 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:19:44 crc kubenswrapper[4915]: E1124 21:19:44.041176 4915 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.346605 4915 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 24 21:19:44 crc kubenswrapper[4915]: E1124 21:19:44.414001 4915 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.107:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b0e19a491ca30 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-24 21:19:42.34367032 +0000 UTC m=+0.659922523,LastTimestamp:2025-11-24 21:19:42.34367032 +0000 UTC m=+0.659922523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.444305 4915 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459" exitCode=0 Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.444486 4915 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.444486 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459"} Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.447012 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.447041 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.447053 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.452688 4915 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b" exitCode=0 Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.452852 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b"} Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.452890 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.454453 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.454505 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.454523 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.457639 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.458207 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f"} Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.458261 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8"} Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.458274 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1"} Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.458930 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.458957 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.458968 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.460628 4915 generic.go:334] "Generic (PLEG): container 
finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9" exitCode=0 Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.460710 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.460728 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9"} Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.462397 4915 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d0d1f0567f02e99d743d58700039b1bfd0e0fb3229f8755c018888d74f82af93" exitCode=0 Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.462432 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d0d1f0567f02e99d743d58700039b1bfd0e0fb3229f8755c018888d74f82af93"} Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.462520 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.462894 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.462917 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.462929 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.463417 4915 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.463441 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:44 crc kubenswrapper[4915]: I1124 21:19:44.463452 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.346209 4915 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 24 21:19:45 crc kubenswrapper[4915]: E1124 21:19:45.365392 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.107:6443: connect: connection refused" interval="3.2s" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.470410 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.471006 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60"} Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.471430 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.471472 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.471484 4915 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.474258 4915 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a" exitCode=0 Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.474380 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a"} Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.474464 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.475229 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.475353 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.475369 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.477069 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e8cc8d1fdf79e7fba176ea0a5438b71c7d75ddff7816448f51c963f26550374d"} Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.477164 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.478678 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.478704 
4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.478715 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.480870 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"014646fee05d1d964cd6d7b56ab09b6c95b56f92796b542c0675a50734e92f37"} Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.480897 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.480916 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3aa36275462843401ca1bae1eee8831ff2c28d8e6af24eaea56d765c16ecc8af"} Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.480938 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"74c9823ea60f9f0adf6aca07204f39ef6bf6eeb622f4fcba7d3f804ae38f337d"} Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.481805 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.481849 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.481868 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 
21:19:45.483833 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a"} Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.483857 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74"} Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.483867 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0"} Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.483876 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c"} Nov 24 21:19:45 crc kubenswrapper[4915]: W1124 21:19:45.530881 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 24 21:19:45 crc kubenswrapper[4915]: E1124 21:19:45.530974 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.641844 
4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.642971 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.643009 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.643019 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:45 crc kubenswrapper[4915]: I1124 21:19:45.643046 4915 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:19:45 crc kubenswrapper[4915]: E1124 21:19:45.643504 4915 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.107:6443: connect: connection refused" node="crc" Nov 24 21:19:45 crc kubenswrapper[4915]: W1124 21:19:45.816553 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.107:6443: connect: connection refused Nov 24 21:19:45 crc kubenswrapper[4915]: E1124 21:19:45.816714 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.107:6443: connect: connection refused" logger="UnhandledError" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.199769 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 
21:19:46.337187 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.490881 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54"} Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.491065 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.492329 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.492357 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.492365 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.495061 4915 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03" exitCode=0 Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.495151 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.495160 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.495280 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.495301 4915 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.495316 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03"} Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.496087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.496119 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.496128 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.496194 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.496213 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.496222 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.496348 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.496360 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.496367 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.496941 4915 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.496959 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:46 crc kubenswrapper[4915]: I1124 21:19:46.496967 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.502921 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.502911 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2"} Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.502973 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.502982 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e"} Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.503001 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90"} Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.503012 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d"} Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 
21:19:47.503046 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.503153 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.504414 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.504445 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.504457 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.504479 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.504493 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.504501 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.504426 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.504533 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.504542 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:47 crc kubenswrapper[4915]: I1124 21:19:47.939944 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:48 crc kubenswrapper[4915]: I1124 21:19:48.511336 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:48 crc kubenswrapper[4915]: I1124 21:19:48.512472 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:48 crc kubenswrapper[4915]: I1124 21:19:48.513194 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4"} Nov 24 21:19:48 crc kubenswrapper[4915]: I1124 21:19:48.513734 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:48 crc kubenswrapper[4915]: I1124 21:19:48.513844 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:48 crc kubenswrapper[4915]: I1124 21:19:48.513869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:48 crc kubenswrapper[4915]: I1124 21:19:48.515296 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:48 crc kubenswrapper[4915]: I1124 21:19:48.515343 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:48 crc kubenswrapper[4915]: I1124 21:19:48.515366 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:48 crc kubenswrapper[4915]: I1124 21:19:48.844263 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:48 crc kubenswrapper[4915]: I1124 21:19:48.846098 4915 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:48 crc kubenswrapper[4915]: I1124 21:19:48.846170 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:48 crc kubenswrapper[4915]: I1124 21:19:48.846194 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:48 crc kubenswrapper[4915]: I1124 21:19:48.846227 4915 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:19:49 crc kubenswrapper[4915]: I1124 21:19:49.337448 4915 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 21:19:49 crc kubenswrapper[4915]: I1124 21:19:49.337543 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:19:49 crc kubenswrapper[4915]: I1124 21:19:49.514024 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:49 crc kubenswrapper[4915]: I1124 21:19:49.514957 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:49 crc kubenswrapper[4915]: I1124 21:19:49.515020 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:49 crc kubenswrapper[4915]: I1124 21:19:49.515040 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 24 21:19:49 crc kubenswrapper[4915]: I1124 21:19:49.652957 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 24 21:19:50 crc kubenswrapper[4915]: I1124 21:19:50.517087 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:50 crc kubenswrapper[4915]: I1124 21:19:50.518093 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:50 crc kubenswrapper[4915]: I1124 21:19:50.518121 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:50 crc kubenswrapper[4915]: I1124 21:19:50.518130 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:50 crc kubenswrapper[4915]: I1124 21:19:50.845728 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:50 crc kubenswrapper[4915]: I1124 21:19:50.846009 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:50 crc kubenswrapper[4915]: I1124 21:19:50.847543 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:50 crc kubenswrapper[4915]: I1124 21:19:50.847596 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:50 crc kubenswrapper[4915]: I1124 21:19:50.847614 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:51 crc kubenswrapper[4915]: I1124 21:19:51.487992 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:51 crc kubenswrapper[4915]: I1124 21:19:51.519511 4915 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:51 crc kubenswrapper[4915]: I1124 21:19:51.520588 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:51 crc kubenswrapper[4915]: I1124 21:19:51.520654 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:51 crc kubenswrapper[4915]: I1124 21:19:51.520672 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:52 crc kubenswrapper[4915]: I1124 21:19:52.048152 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 24 21:19:52 crc kubenswrapper[4915]: I1124 21:19:52.048332 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:52 crc kubenswrapper[4915]: I1124 21:19:52.049959 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:52 crc kubenswrapper[4915]: I1124 21:19:52.050049 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:52 crc kubenswrapper[4915]: I1124 21:19:52.050070 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:52 crc kubenswrapper[4915]: E1124 21:19:52.544400 4915 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 24 21:19:53 crc kubenswrapper[4915]: I1124 21:19:53.753284 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:19:53 crc kubenswrapper[4915]: I1124 21:19:53.753526 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Nov 24 21:19:53 crc kubenswrapper[4915]: I1124 21:19:53.755264 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:53 crc kubenswrapper[4915]: I1124 21:19:53.755313 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:53 crc kubenswrapper[4915]: I1124 21:19:53.755330 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:53 crc kubenswrapper[4915]: I1124 21:19:53.988028 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:19:54 crc kubenswrapper[4915]: I1124 21:19:54.527228 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:54 crc kubenswrapper[4915]: I1124 21:19:54.528204 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:54 crc kubenswrapper[4915]: I1124 21:19:54.528239 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:54 crc kubenswrapper[4915]: I1124 21:19:54.528248 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:55 crc kubenswrapper[4915]: I1124 21:19:55.256220 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:19:55 crc kubenswrapper[4915]: I1124 21:19:55.262192 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:19:55 crc kubenswrapper[4915]: I1124 21:19:55.528892 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Nov 24 21:19:55 crc kubenswrapper[4915]: I1124 21:19:55.529750 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:55 crc kubenswrapper[4915]: I1124 21:19:55.529833 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:55 crc kubenswrapper[4915]: I1124 21:19:55.529857 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:55 crc kubenswrapper[4915]: I1124 21:19:55.534232 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:19:56 crc kubenswrapper[4915]: W1124 21:19:56.213229 4915 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.213630 4915 trace.go:236] Trace[1643450287]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 21:19:46.211) (total time: 10002ms): Nov 24 21:19:56 crc kubenswrapper[4915]: Trace[1643450287]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:19:56.213) Nov 24 21:19:56 crc kubenswrapper[4915]: Trace[1643450287]: [10.002204982s] [10.002204982s] END Nov 24 21:19:56 crc kubenswrapper[4915]: E1124 21:19:56.213965 4915 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" 
logger="UnhandledError" Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.347245 4915 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.526555 4915 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.526646 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.533390 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.533828 4915 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Nov 24 21:19:56 crc 
kubenswrapper[4915]: I1124 21:19:56.533878 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.536290 4915 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54" exitCode=255 Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.536405 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.536386 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54"} Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.536703 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.538839 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.538880 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.538893 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.538901 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.538937 4915 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.538948 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:56 crc kubenswrapper[4915]: I1124 21:19:56.539462 4915 scope.go:117] "RemoveContainer" containerID="0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54" Nov 24 21:19:57 crc kubenswrapper[4915]: I1124 21:19:57.541410 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 24 21:19:57 crc kubenswrapper[4915]: I1124 21:19:57.543432 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918"} Nov 24 21:19:57 crc kubenswrapper[4915]: I1124 21:19:57.543491 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:57 crc kubenswrapper[4915]: I1124 21:19:57.543592 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:57 crc kubenswrapper[4915]: I1124 21:19:57.544723 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:57 crc kubenswrapper[4915]: I1124 21:19:57.544757 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:57 crc kubenswrapper[4915]: I1124 21:19:57.544771 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:57 crc kubenswrapper[4915]: I1124 21:19:57.545608 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:57 crc 
kubenswrapper[4915]: I1124 21:19:57.545639 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:57 crc kubenswrapper[4915]: I1124 21:19:57.545652 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:57 crc kubenswrapper[4915]: I1124 21:19:57.940684 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:19:58 crc kubenswrapper[4915]: I1124 21:19:58.546170 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:19:58 crc kubenswrapper[4915]: I1124 21:19:58.547273 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:19:58 crc kubenswrapper[4915]: I1124 21:19:58.547299 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:19:58 crc kubenswrapper[4915]: I1124 21:19:58.547307 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:19:59 crc kubenswrapper[4915]: I1124 21:19:59.339041 4915 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 21:19:59 crc kubenswrapper[4915]: I1124 21:19:59.339610 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" Nov 24 21:20:00 crc kubenswrapper[4915]: I1124 21:20:00.850881 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:20:00 crc kubenswrapper[4915]: I1124 21:20:00.851082 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:20:00 crc kubenswrapper[4915]: I1124 21:20:00.852346 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:00 crc kubenswrapper[4915]: I1124 21:20:00.852384 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:00 crc kubenswrapper[4915]: I1124 21:20:00.852392 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:00 crc kubenswrapper[4915]: I1124 21:20:00.857367 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:20:01 crc kubenswrapper[4915]: E1124 21:20:01.507621 4915 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 24 21:20:01 crc kubenswrapper[4915]: I1124 21:20:01.509017 4915 trace.go:236] Trace[416458386]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 21:19:49.316) (total time: 12192ms): Nov 24 21:20:01 crc kubenswrapper[4915]: Trace[416458386]: ---"Objects listed" error: 12192ms (21:20:01.508) Nov 24 21:20:01 crc kubenswrapper[4915]: Trace[416458386]: [12.192858302s] [12.192858302s] END Nov 24 21:20:01 crc kubenswrapper[4915]: I1124 21:20:01.509040 4915 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Nov 24 21:20:01 crc kubenswrapper[4915]: I1124 21:20:01.509721 4915 trace.go:236] Trace[284556321]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 21:19:46.768) (total time: 14740ms): Nov 24 21:20:01 crc kubenswrapper[4915]: Trace[284556321]: ---"Objects listed" error: 14740ms (21:20:01.509) Nov 24 21:20:01 crc kubenswrapper[4915]: Trace[284556321]: [14.740845901s] [14.740845901s] END Nov 24 21:20:01 crc kubenswrapper[4915]: I1124 21:20:01.509749 4915 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 24 21:20:01 crc kubenswrapper[4915]: E1124 21:20:01.509874 4915 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 24 21:20:01 crc kubenswrapper[4915]: I1124 21:20:01.510331 4915 trace.go:236] Trace[664510477]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (24-Nov-2025 21:19:51.288) (total time: 10221ms): Nov 24 21:20:01 crc kubenswrapper[4915]: Trace[664510477]: ---"Objects listed" error: 10221ms (21:20:01.510) Nov 24 21:20:01 crc kubenswrapper[4915]: Trace[664510477]: [10.221535624s] [10.221535624s] END Nov 24 21:20:01 crc kubenswrapper[4915]: I1124 21:20:01.510347 4915 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 24 21:20:01 crc kubenswrapper[4915]: I1124 21:20:01.511093 4915 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.020908 4915 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.087589 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 
21:20:02.105283 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.345527 4915 apiserver.go:52] "Watching apiserver" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.348314 4915 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.348582 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc","openshift-image-registry/node-ca-kg8p2","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-dns/node-resolver-vl494","openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.348926 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.349081 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.349200 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.349265 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.349409 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.349511 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.349590 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.349631 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.349678 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.349844 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kg8p2" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.349856 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vl494" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.351582 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.351842 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.352049 4915 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.352387 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.352426 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.352469 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.352520 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.352558 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.352575 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.352522 4915 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.352723 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.352770 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.353001 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.353194 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.354542 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.355847 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.360935 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.371845 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.383639 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, 
/tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.393256 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.403148 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.412055 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.415698 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.415738 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.415767 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.415813 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.415840 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.415864 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.415889 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.415912 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.415939 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.415961 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.415982 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:20:02 crc 
kubenswrapper[4915]: I1124 21:20:02.416008 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416032 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416055 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416080 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416105 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416128 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416154 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416181 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416194 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416191 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416236 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416204 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416347 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416377 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416381 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). 
InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416403 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416427 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416452 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416473 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416498 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416513 4915 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416522 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416544 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416566 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416589 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416583 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" 
(OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416617 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416643 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416667 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416666 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416691 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416717 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416741 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416768 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416828 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416856 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416882 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416913 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416937 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416960 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416982 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod 
\"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417004 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417027 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417043 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417058 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417074 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417088 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417103 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417119 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417134 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417148 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417165 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 
21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417186 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417202 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417218 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417240 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417256 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417272 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417286 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417301 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417329 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417385 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417407 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417430 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417454 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417480 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417809 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417832 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417855 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417877 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417900 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417925 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417949 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417973 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417994 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418016 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418038 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418058 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418079 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418101 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418126 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418148 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418169 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418191 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418214 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: 
\"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418238 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418262 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418284 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418311 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418334 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418358 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418380 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418404 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418426 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418451 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418473 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418496 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418577 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418600 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418626 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418651 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418675 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418699 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418720 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418745 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418771 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418811 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 
21:20:02.418836 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418863 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418889 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416694 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418918 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416751 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416744 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416861 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416881 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.416989 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417024 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417037 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417110 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417160 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417198 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417292 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417375 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417400 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417423 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417465 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417555 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417658 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417656 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417688 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417835 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.417930 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418153 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418182 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418195 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418247 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418442 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418521 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418549 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418606 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418635 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418747 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418977 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.418910 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.419975 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420005 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420028 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420047 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420065 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420082 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420099 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420118 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420136 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420155 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 24 21:20:02 crc 
kubenswrapper[4915]: I1124 21:20:02.420174 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420191 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420207 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420226 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420245 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420268 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420287 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420306 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420328 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420344 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420361 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420379 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420397 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420415 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420431 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420448 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420483 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420502 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420518 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420534 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420552 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420568 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420583 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420602 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420618 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420633 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420653 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420670 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420686 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420704 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420719 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420736 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420751 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420768 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420797 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420814 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420829 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420845 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420862 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420879 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420894 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420909 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420925 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420946 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420963 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.420981 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421000 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421016 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421031 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421052 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421070 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421086 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421101 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421119 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421133 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421150 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421169 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421185 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421200 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421229 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421251 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421269 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421361 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421380 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421397 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421413 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421429 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421445 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421461 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421477 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421517 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421539 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421559 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421578 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421596 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.421614 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.422546 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.422572 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.422596 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.422614 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.422631 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3edde8ec-c020-4be1-8007-edf769dd0ecc-host\") pod \"node-ca-kg8p2\" (UID: \"3edde8ec-c020-4be1-8007-edf769dd0ecc\") " pod="openshift-image-registry/node-ca-kg8p2"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.422663 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.422690 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.422709 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7dd98449-7ae2-455f-aa42-fc277ebfd5f2-hosts-file\") pod \"node-resolver-vl494\" (UID: \"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\") " pod="openshift-dns/node-resolver-vl494"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.422728 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.422745 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.422761 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pttv5\" (UniqueName: \"kubernetes.io/projected/3edde8ec-c020-4be1-8007-edf769dd0ecc-kube-api-access-pttv5\") pod \"node-ca-kg8p2\" (UID: \"3edde8ec-c020-4be1-8007-edf769dd0ecc\") " pod="openshift-image-registry/node-ca-kg8p2"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.422791 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqrj5\" (UniqueName: \"kubernetes.io/projected/7dd98449-7ae2-455f-aa42-fc277ebfd5f2-kube-api-access-sqrj5\") pod \"node-resolver-vl494\" (UID: \"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\") " pod="openshift-dns/node-resolver-vl494"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.422810 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3edde8ec-c020-4be1-8007-edf769dd0ecc-serviceca\") pod \"node-ca-kg8p2\" (UID: \"3edde8ec-c020-4be1-8007-edf769dd0ecc\") " pod="openshift-image-registry/node-ca-kg8p2"
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425704 4915 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425726 4915 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425739 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425750 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425762 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425785 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425796 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425807 4915 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425818 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425828 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425839 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425848 4915 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425858 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425869 4915 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425878 4915 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425889 4915 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425899 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425908 4915 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425919 4915 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425929 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425939 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425950 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425961 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425971 4915 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425981 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425991 4915 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426001 4915 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426010 4915 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426020 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426029 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426038 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426048 4915 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426058 4915 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426068 4915 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426077 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426088 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426100 4915 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426111 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426120 4915 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath 
\"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.422845 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.424397 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.424474 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425165 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425339 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425556 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425609 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.425902 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426072 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426257 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426304 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426424 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.426909 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.427221 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.431569 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.431659 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.431760 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.432115 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.432362 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.432353 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.432762 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.432904 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.432963 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.433056 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.433216 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.433587 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.433748 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.434332 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.434385 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.434941 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.435490 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.435897 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.436055 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.436210 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.436439 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.436853 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.437373 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.437385 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.437419 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.437587 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.437701 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.437817 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.438121 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.438441 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.438511 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.438643 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.438667 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.438713 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.438765 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.441704 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.441733 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.441742 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.441876 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.441934 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.442093 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.442170 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.442186 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.442243 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.442894 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.442907 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.442929 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.443165 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.443194 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.443432 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.443456 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.443467 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.443491 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.443690 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.443731 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.444094 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.444499 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.444584 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.445191 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.445225 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.445508 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.445599 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.445982 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.446266 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.446411 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.446588 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.446618 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.446799 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.446911 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.447142 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.447441 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.447603 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.448310 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.448550 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.448878 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.449421 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.449954 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.450097 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.450241 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.450379 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.451234 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.451367 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.451478 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.451668 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.451694 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.451864 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.451924 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.453745 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.454155 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.454232 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.454241 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.454335 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.454530 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.454633 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.454968 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.455059 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.455316 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.455894 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.455985 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.456177 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.456753 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.457290 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.457859 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.458334 4915 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.458947 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.459161 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:20:02.959137872 +0000 UTC m=+21.275390045 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.459214 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.459225 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.459410 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.459641 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.459836 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:02.959287416 +0000 UTC m=+21.275539599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.460082 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.460265 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.460319 4915 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:02.960303272 +0000 UTC m=+21.276555455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.460339 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.460673 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.463464 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.463593 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.475391 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.475280 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.475576 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.476228 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.476369 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.484452 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.487173 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.491107 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.491824 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.499878 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.502726 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.502944 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.503435 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.505927 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.508927 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.509232 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.509330 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.509422 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.509571 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:03.009548403 +0000 UTC m=+21.325800636 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.510443 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.511352 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.511603 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.512274 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.512431 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.513112 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.513285 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.513499 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.513525 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.513540 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.513604 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:03.013583618 +0000 UTC m=+21.329835871 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.523105 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.523226 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.523217 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.523410 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.523532 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.523548 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.524260 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.525213 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528118 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528156 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528171 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3edde8ec-c020-4be1-8007-edf769dd0ecc-host\") pod \"node-ca-kg8p2\" (UID: \"3edde8ec-c020-4be1-8007-edf769dd0ecc\") " pod="openshift-image-registry/node-ca-kg8p2" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528197 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7dd98449-7ae2-455f-aa42-fc277ebfd5f2-hosts-file\") pod \"node-resolver-vl494\" (UID: \"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\") " 
pod="openshift-dns/node-resolver-vl494" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528212 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pttv5\" (UniqueName: \"kubernetes.io/projected/3edde8ec-c020-4be1-8007-edf769dd0ecc-kube-api-access-pttv5\") pod \"node-ca-kg8p2\" (UID: \"3edde8ec-c020-4be1-8007-edf769dd0ecc\") " pod="openshift-image-registry/node-ca-kg8p2" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528225 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqrj5\" (UniqueName: \"kubernetes.io/projected/7dd98449-7ae2-455f-aa42-fc277ebfd5f2-kube-api-access-sqrj5\") pod \"node-resolver-vl494\" (UID: \"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\") " pod="openshift-dns/node-resolver-vl494" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528239 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3edde8ec-c020-4be1-8007-edf769dd0ecc-serviceca\") pod \"node-ca-kg8p2\" (UID: \"3edde8ec-c020-4be1-8007-edf769dd0ecc\") " pod="openshift-image-registry/node-ca-kg8p2" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528286 4915 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528296 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528305 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc 
kubenswrapper[4915]: I1124 21:20:02.528313 4915 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528321 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528328 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528336 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528344 4915 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528352 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528360 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528369 4915 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528376 4915 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528384 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528392 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528400 4915 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528408 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528415 4915 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528432 4915 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528441 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528449 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528456 4915 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528463 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528471 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528480 4915 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528489 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 24 
21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528497 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528504 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528513 4915 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528520 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528528 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528537 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528545 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528553 4915 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528561 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528569 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528579 4915 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528587 4915 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528595 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528602 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528610 4915 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528617 4915 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528624 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528632 4915 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528639 4915 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528647 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528654 4915 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528661 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528669 4915 
reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528678 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528686 4915 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528694 4915 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528702 4915 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528709 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528718 4915 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528728 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" 
(UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528739 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528747 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528755 4915 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528763 4915 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528772 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528830 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528841 4915 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528852 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528862 4915 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528872 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528882 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528890 4915 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528899 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528907 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" 
Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528915 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528922 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528930 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528937 4915 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528945 4915 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528953 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528960 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528968 4915 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528975 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528983 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528991 4915 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.528998 4915 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529006 4915 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529014 4915 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529021 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529031 4915 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529038 4915 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529046 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529054 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529062 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529070 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529078 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529087 4915 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529094 4915 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529101 4915 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529109 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529118 4915 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529126 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529134 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529143 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529151 4915 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529158 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529166 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529173 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529181 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529189 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529197 4915 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529205 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529213 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529221 4915 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529228 4915 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529236 4915 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529244 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529253 4915 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529261 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529269 4915 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529277 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529285 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529292 4915 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529300 4915 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529308 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529315 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529324 4915 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529331 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529340 4915 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529348 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529356 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529365 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529372 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529380 4915 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529388 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529395 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529403 4915 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529410 4915 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529418 4915 reconciler_common.go:293] "Volume detached for volume 
\"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529425 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529434 4915 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529443 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529450 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529459 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529466 4915 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529498 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529505 4915 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529513 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529522 4915 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529530 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529540 4915 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529707 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529864 4915 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529895 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/7dd98449-7ae2-455f-aa42-fc277ebfd5f2-hosts-file\") pod \"node-resolver-vl494\" (UID: \"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\") " pod="openshift-dns/node-resolver-vl494" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529929 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529955 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3edde8ec-c020-4be1-8007-edf769dd0ecc-host\") pod \"node-ca-kg8p2\" (UID: \"3edde8ec-c020-4be1-8007-edf769dd0ecc\") " pod="openshift-image-registry/node-ca-kg8p2" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.529883 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.530767 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3edde8ec-c020-4be1-8007-edf769dd0ecc-serviceca\") pod \"node-ca-kg8p2\" (UID: \"3edde8ec-c020-4be1-8007-edf769dd0ecc\") " pod="openshift-image-registry/node-ca-kg8p2" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.533305 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.533527 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.533960 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.534498 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.534938 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.535603 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.551007 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.553758 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.553891 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.555847 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.556358 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.562315 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pttv5\" (UniqueName: \"kubernetes.io/projected/3edde8ec-c020-4be1-8007-edf769dd0ecc-kube-api-access-pttv5\") pod \"node-ca-kg8p2\" (UID: \"3edde8ec-c020-4be1-8007-edf769dd0ecc\") " pod="openshift-image-registry/node-ca-kg8p2" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.566159 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.567218 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.568308 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.569122 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.570086 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.570199 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqrj5\" (UniqueName: \"kubernetes.io/projected/7dd98449-7ae2-455f-aa42-fc277ebfd5f2-kube-api-access-sqrj5\") pod \"node-resolver-vl494\" (UID: 
\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\") " pod="openshift-dns/node-resolver-vl494" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.570922 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.571474 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.572386 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.572869 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.572930 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.573610 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.574136 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.574651 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.575483 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.576058 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.576367 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.576846 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.577443 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.578086 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: E1124 21:20:02.579059 4915 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.579180 4915 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.579854 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.581030 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.582026 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.583029 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.583702 4915 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.583849 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.586224 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 24 21:20:02 
crc kubenswrapper[4915]: I1124 21:20:02.587209 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.587625 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.588390 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.589494 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.590661 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.591436 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.592151 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.593164 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.593596 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.594175 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.595099 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.596023 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.596490 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.597338 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.597819 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.598831 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.599272 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.600045 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.600474 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.601047 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.602131 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.602559 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.607501 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7
79036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\
\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.617353 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.626526 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.630170 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.630196 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.630205 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.630214 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") 
on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.630222 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.630231 4915 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.630239 4915 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.630248 4915 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.630256 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.633917 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.641713 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.651834 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.661421 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.666208 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.670665 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.672969 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 24 21:20:02 crc kubenswrapper[4915]: W1124 21:20:02.679749 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c08a9aec54555c27d380365b61390812cc528fe82d9569f3289690a4cfa509ba WatchSource:0}: Error finding container c08a9aec54555c27d380365b61390812cc528fe82d9569f3289690a4cfa509ba: Status 404 returned error can't find the container with id c08a9aec54555c27d380365b61390812cc528fe82d9569f3289690a4cfa509ba Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.682108 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.683187 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.691584 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-kg8p2" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.694190 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.697114 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vl494" Nov 24 21:20:02 crc kubenswrapper[4915]: W1124 21:20:02.723100 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dd98449_7ae2_455f_aa42_fc277ebfd5f2.slice/crio-5752c6e9b9e087fe834b102bc5fdce533721d033bdab646ff47b90fbc59ddbc3 WatchSource:0}: Error finding container 5752c6e9b9e087fe834b102bc5fdce533721d033bdab646ff47b90fbc59ddbc3: Status 404 returned error can't find the container with id 5752c6e9b9e087fe834b102bc5fdce533721d033bdab646ff47b90fbc59ddbc3 Nov 24 21:20:02 crc kubenswrapper[4915]: W1124 21:20:02.732288 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3edde8ec_c020_4be1_8007_edf769dd0ecc.slice/crio-91a3566ad472d5aed0bca82694ab4a8224c99b844e7a806a6c569946e890e1f2 WatchSource:0}: Error finding container 91a3566ad472d5aed0bca82694ab4a8224c99b844e7a806a6c569946e890e1f2: Status 404 returned error can't find the container with id 91a3566ad472d5aed0bca82694ab4a8224c99b844e7a806a6c569946e890e1f2 Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.923525 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-b8kq8"] Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.923738 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-lxwjd"] Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.923915 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-r7mbp"] Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.924312 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.924883 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jmqqt"] Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.925469 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.925564 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.925888 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-b8kq8" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.927802 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.927968 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.929963 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.930019 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.930257 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.930375 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.934766 4915 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.936220 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.936370 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.936499 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.936918 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.937097 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.937125 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.937151 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.937279 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.937338 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.937534 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 
21:20:02.937562 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.935214 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.956844 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.968587 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:02 crc kubenswrapper[4915]: I1124 21:20:02.992026 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.017806 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.028950 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.038847 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041421 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041516 4915 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041546 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-run-netns\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: E1124 21:20:03.041563 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:20:04.041545903 +0000 UTC m=+22.357798076 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041636 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-var-lib-cni-bin\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: E1124 21:20:03.041656 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:20:03 crc kubenswrapper[4915]: E1124 21:20:03.041688 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:04.041678786 +0000 UTC m=+22.357930959 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041691 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-ovnkube-script-lib\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041709 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-var-lib-cni-multus\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041726 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-log-socket\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041809 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-cnibin\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041832 
4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-var-lib-kubelet\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041854 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-systemd-units\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041875 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-system-cni-dir\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041914 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-etc-kubernetes\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041939 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3a95ccb9-af8d-493c-b3c5-4fcb2e28b992-rootfs\") pod \"machine-config-daemon-lxwjd\" (UID: \"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\") " pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041964 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2w9k\" (UniqueName: \"kubernetes.io/projected/3f235785-6b02-4304-99b8-3b216c369d45-kube-api-access-l2w9k\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.041987 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-multus-conf-dir\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042051 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-etc-openvswitch\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042078 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f235785-6b02-4304-99b8-3b216c369d45-ovn-node-metrics-cert\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042096 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-run-multus-certs\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 
21:20:03.042119 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042136 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-run-netns\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042164 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjn9v\" (UniqueName: \"kubernetes.io/projected/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-kube-api-access-xjn9v\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042225 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-kubelet\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042318 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-openvswitch\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 
21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042355 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b71a6eb1-12c2-4f84-875b-868c12dd17b9-system-cni-dir\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042377 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b71a6eb1-12c2-4f84-875b-868c12dd17b9-cni-binary-copy\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042417 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b71a6eb1-12c2-4f84-875b-868c12dd17b9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042435 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-multus-cni-dir\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042737 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-hostroot\") pod \"multus-b8kq8\" (UID: 
\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042763 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042807 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-env-overrides\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042823 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wqvc\" (UniqueName: \"kubernetes.io/projected/b71a6eb1-12c2-4f84-875b-868c12dd17b9-kube-api-access-7wqvc\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042838 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-os-release\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042852 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-multus-daemon-config\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042894 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-ovnkube-config\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042916 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042932 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-run-ovn-kubernetes\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042950 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-cni-netd\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042966 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b71a6eb1-12c2-4f84-875b-868c12dd17b9-os-release\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.042983 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-multus-socket-dir-parent\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: E1124 21:20:03.042966 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:20:03 crc kubenswrapper[4915]: E1124 21:20:03.043039 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:20:03 crc kubenswrapper[4915]: E1124 21:20:03.043051 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:03 crc kubenswrapper[4915]: E1124 21:20:03.043090 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:20:03 crc kubenswrapper[4915]: E1124 21:20:03.043105 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:20:03 crc kubenswrapper[4915]: E1124 21:20:03.043116 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.043135 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:03 crc kubenswrapper[4915]: E1124 21:20:03.043157 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:04.043144084 +0000 UTC m=+22.359396257 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:03 crc kubenswrapper[4915]: E1124 21:20:03.043194 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:20:03 crc kubenswrapper[4915]: E1124 21:20:03.043209 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:04.043188305 +0000 UTC m=+22.359440558 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.043232 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45qlw\" (UniqueName: \"kubernetes.io/projected/3a95ccb9-af8d-493c-b3c5-4fcb2e28b992-kube-api-access-45qlw\") pod \"machine-config-daemon-lxwjd\" (UID: \"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\") " pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.043259 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-node-log\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.043282 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b71a6eb1-12c2-4f84-875b-868c12dd17b9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.043304 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-ovn\") pod \"ovnkube-node-jmqqt\" (UID: 
\"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.043325 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-run-k8s-cni-cncf-io\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.043345 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-slash\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.043384 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a95ccb9-af8d-493c-b3c5-4fcb2e28b992-mcd-auth-proxy-config\") pod \"machine-config-daemon-lxwjd\" (UID: \"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\") " pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.043408 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a95ccb9-af8d-493c-b3c5-4fcb2e28b992-proxy-tls\") pod \"machine-config-daemon-lxwjd\" (UID: \"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\") " pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.043426 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-systemd\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.043445 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-var-lib-openvswitch\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.043464 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-cni-bin\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.043485 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b71a6eb1-12c2-4f84-875b-868c12dd17b9-cnibin\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.043508 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-cni-binary-copy\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: E1124 21:20:03.043600 4915 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:04.043568215 +0000 UTC m=+22.359820508 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.047643 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.057080 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.072232 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.080932 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.090904 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.101271 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.110973 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.138535 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.144374 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-cni-netd\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.144415 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b71a6eb1-12c2-4f84-875b-868c12dd17b9-os-release\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.144438 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-multus-socket-dir-parent\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.144549 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-multus-socket-dir-parent\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.144534 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-cni-netd\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 
21:20:03.144713 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b71a6eb1-12c2-4f84-875b-868c12dd17b9-os-release\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.144866 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-node-log\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.144912 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-node-log\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.144894 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b71a6eb1-12c2-4f84-875b-868c12dd17b9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145017 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45qlw\" (UniqueName: \"kubernetes.io/projected/3a95ccb9-af8d-493c-b3c5-4fcb2e28b992-kube-api-access-45qlw\") pod \"machine-config-daemon-lxwjd\" (UID: \"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\") " pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145044 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-run-k8s-cni-cncf-io\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145068 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-slash\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145089 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-ovn\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145108 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a95ccb9-af8d-493c-b3c5-4fcb2e28b992-proxy-tls\") pod \"machine-config-daemon-lxwjd\" (UID: \"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\") " pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145118 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-run-k8s-cni-cncf-io\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145126 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-slash\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145128 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a95ccb9-af8d-493c-b3c5-4fcb2e28b992-mcd-auth-proxy-config\") pod \"machine-config-daemon-lxwjd\" (UID: \"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\") " pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145168 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-ovn\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145283 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-var-lib-openvswitch\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145324 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-cni-bin\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145356 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/b71a6eb1-12c2-4f84-875b-868c12dd17b9-cnibin\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145367 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-var-lib-openvswitch\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145382 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-cni-binary-copy\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145400 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b71a6eb1-12c2-4f84-875b-868c12dd17b9-cnibin\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145399 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-cni-bin\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145410 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-systemd\") pod 
\"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145435 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-run-netns\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145450 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-var-lib-cni-bin\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145462 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-systemd\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145477 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-var-lib-cni-multus\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145498 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-var-lib-cni-multus\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " 
pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145505 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-run-netns\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145511 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-ovnkube-script-lib\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145520 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-var-lib-cni-bin\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145542 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-cnibin\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145562 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-var-lib-kubelet\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145566 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b71a6eb1-12c2-4f84-875b-868c12dd17b9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145584 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-systemd-units\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145604 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-log-socket\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145623 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-etc-kubernetes\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145633 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-var-lib-kubelet\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145649 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/3a95ccb9-af8d-493c-b3c5-4fcb2e28b992-rootfs\") pod \"machine-config-daemon-lxwjd\" (UID: \"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\") " pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145661 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-log-socket\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145669 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-system-cni-dir\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145687 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-etc-kubernetes\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145695 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2w9k\" (UniqueName: \"kubernetes.io/projected/3f235785-6b02-4304-99b8-3b216c369d45-kube-api-access-l2w9k\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145626 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-cnibin\") pod \"multus-b8kq8\" (UID: 
\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145717 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-systemd-units\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145794 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-multus-conf-dir\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145818 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3a95ccb9-af8d-493c-b3c5-4fcb2e28b992-rootfs\") pod \"machine-config-daemon-lxwjd\" (UID: \"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\") " pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145863 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-system-cni-dir\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145875 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-multus-conf-dir\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145865 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-etc-openvswitch\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145899 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-etc-openvswitch\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145936 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f235785-6b02-4304-99b8-3b216c369d45-ovn-node-metrics-cert\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145961 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-run-multus-certs\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.145984 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-run-netns\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146006 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xjn9v\" (UniqueName: \"kubernetes.io/projected/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-kube-api-access-xjn9v\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146021 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3a95ccb9-af8d-493c-b3c5-4fcb2e28b992-mcd-auth-proxy-config\") pod \"machine-config-daemon-lxwjd\" (UID: \"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\") " pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146030 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-kubelet\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146058 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146007 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-run-multus-certs\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146099 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-kubelet\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146117 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-cni-binary-copy\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146130 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146108 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-openvswitch\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146032 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-host-run-netns\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146084 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-openvswitch\") 
pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146212 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b71a6eb1-12c2-4f84-875b-868c12dd17b9-system-cni-dir\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146238 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b71a6eb1-12c2-4f84-875b-868c12dd17b9-cni-binary-copy\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146272 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b71a6eb1-12c2-4f84-875b-868c12dd17b9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146294 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-multus-cni-dir\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146302 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b71a6eb1-12c2-4f84-875b-868c12dd17b9-system-cni-dir\") pod 
\"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146315 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-hostroot\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146343 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-hostroot\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146399 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-env-overrides\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146436 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wqvc\" (UniqueName: \"kubernetes.io/projected/b71a6eb1-12c2-4f84-875b-868c12dd17b9-kube-api-access-7wqvc\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146449 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-multus-cni-dir\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " 
pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146470 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-os-release\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146503 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-multus-daemon-config\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146546 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-run-ovn-kubernetes\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146573 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-ovnkube-config\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146902 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b71a6eb1-12c2-4f84-875b-868c12dd17b9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc 
kubenswrapper[4915]: I1124 21:20:03.146941 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-ovnkube-script-lib\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.146967 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-env-overrides\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.147011 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-os-release\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.147212 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-run-ovn-kubernetes\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.147362 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b71a6eb1-12c2-4f84-875b-868c12dd17b9-cni-binary-copy\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.147473 4915 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-ovnkube-config\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.147572 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-multus-daemon-config\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.152573 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f235785-6b02-4304-99b8-3b216c369d45-ovn-node-metrics-cert\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.155295 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a95ccb9-af8d-493c-b3c5-4fcb2e28b992-proxy-tls\") pod \"machine-config-daemon-lxwjd\" (UID: \"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\") " pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.185053 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.194828 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.212548 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.220787 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.228318 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.235033 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.241958 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.256617 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.260152 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wqvc\" (UniqueName: \"kubernetes.io/projected/b71a6eb1-12c2-4f84-875b-868c12dd17b9-kube-api-access-7wqvc\") pod \"multus-additional-cni-plugins-r7mbp\" (UID: \"b71a6eb1-12c2-4f84-875b-868c12dd17b9\") " pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.260202 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjn9v\" (UniqueName: \"kubernetes.io/projected/f5b8930d-4919-4a02-a962-c93b5f8f4ad3-kube-api-access-xjn9v\") pod \"multus-b8kq8\" (UID: \"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\") " pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.260293 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45qlw\" (UniqueName: \"kubernetes.io/projected/3a95ccb9-af8d-493c-b3c5-4fcb2e28b992-kube-api-access-45qlw\") pod \"machine-config-daemon-lxwjd\" (UID: \"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\") " pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.260443 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2w9k\" (UniqueName: 
\"kubernetes.io/projected/3f235785-6b02-4304-99b8-3b216c369d45-kube-api-access-l2w9k\") pod \"ovnkube-node-jmqqt\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.265281 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.274291 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.274811 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.281058 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.285119 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: W1124 21:20:03.293898 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f235785_6b02_4304_99b8_3b216c369d45.slice/crio-284e5a7af05772bce2a87ba77f8d70473ed3a2118ef6cd86790c844416f36e91 WatchSource:0}: Error finding container 284e5a7af05772bce2a87ba77f8d70473ed3a2118ef6cd86790c844416f36e91: Status 404 returned error can't find the container with id 284e5a7af05772bce2a87ba77f8d70473ed3a2118ef6cd86790c844416f36e91 Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.300192 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.306328 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-b8kq8" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.313194 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:20:03 crc kubenswrapper[4915]: W1124 21:20:03.323886 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5b8930d_4919_4a02_a962_c93b5f8f4ad3.slice/crio-5931a81b0f0f1c608b2f0b1a9fb9e168b73a3c6bc0a5480eaa06ac5757af2399 WatchSource:0}: Error finding container 5931a81b0f0f1c608b2f0b1a9fb9e168b73a3c6bc0a5480eaa06ac5757af2399: Status 404 returned error can't find the container with id 5931a81b0f0f1c608b2f0b1a9fb9e168b73a3c6bc0a5480eaa06ac5757af2399 Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.426065 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:03 crc kubenswrapper[4915]: E1124 21:20:03.426250 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.562423 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b8kq8" event={"ID":"f5b8930d-4919-4a02-a962-c93b5f8f4ad3","Type":"ContainerStarted","Data":"5931a81b0f0f1c608b2f0b1a9fb9e168b73a3c6bc0a5480eaa06ac5757af2399"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.563860 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.563889 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.563900 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ae3a25aeeb16504b8575519857d1fa52caf9f4923510eb66006d2da45b5213b9"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.565732 4915 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3ea24cb3cc210149ea886c382a5231ce8d4972920ebd660788745bf43c6388e4"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.567414 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kg8p2" event={"ID":"3edde8ec-c020-4be1-8007-edf769dd0ecc","Type":"ContainerStarted","Data":"30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.567445 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kg8p2" event={"ID":"3edde8ec-c020-4be1-8007-edf769dd0ecc","Type":"ContainerStarted","Data":"91a3566ad472d5aed0bca82694ab4a8224c99b844e7a806a6c569946e890e1f2"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.568484 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" event={"ID":"b71a6eb1-12c2-4f84-875b-868c12dd17b9","Type":"ContainerStarted","Data":"6546d53b85d8856c9c808dc5fdd4ba182e9bd443d3b48a695efab0248bce6683"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.572928 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vl494" event={"ID":"7dd98449-7ae2-455f-aa42-fc277ebfd5f2","Type":"ContainerStarted","Data":"3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.573024 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vl494" event={"ID":"7dd98449-7ae2-455f-aa42-fc277ebfd5f2","Type":"ContainerStarted","Data":"5752c6e9b9e087fe834b102bc5fdce533721d033bdab646ff47b90fbc59ddbc3"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.574454 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.574496 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c08a9aec54555c27d380365b61390812cc528fe82d9569f3289690a4cfa509ba"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.576133 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f235785-6b02-4304-99b8-3b216c369d45" containerID="e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b" exitCode=0 Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.576202 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerDied","Data":"e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.576227 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerStarted","Data":"284e5a7af05772bce2a87ba77f8d70473ed3a2118ef6cd86790c844416f36e91"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.578434 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"04d0b0c2671f1e76e79128e0f4e2ee6e064a583b85ae99429c52b9e75dd7f54d"} Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.582439 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.596807 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.607247 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.624978 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins 
bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni
/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.641063 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.665192 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.679733 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.705690 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.716927 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.740686 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lo
g/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID
\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646f
b68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.754132 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.768383 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.777960 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.794620 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.837399 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.879397 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.917585 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:03 crc kubenswrapper[4915]: I1124 21:20:03.961654 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:03Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.003463 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.036962 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.057007 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.057124 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.057145 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:20:06.057124883 +0000 UTC m=+24.373377066 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.057174 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.057204 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.057234 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.057251 4915 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.057268 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.057281 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.057322 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.057326 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:06.057315228 +0000 UTC m=+24.373567411 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.057361 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:06.057352839 +0000 UTC m=+24.373605012 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.057373 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.057395 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.057420 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.057450 4915 projected.go:194] Error preparing data for projected volume 
kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.057451 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:06.057435201 +0000 UTC m=+24.373687374 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.057518 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:06.057507043 +0000 UTC m=+24.373759306 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.082433 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7
f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.117413 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.157478 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.199368 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.240813 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.278720 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.315137 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.364937 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.426115 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.426228 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.426254 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:04 crc kubenswrapper[4915]: E1124 21:20:04.426294 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.430316 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.431192 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.432485 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.583524 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669"} Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.583575 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336"} Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.585620 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b8kq8" event={"ID":"f5b8930d-4919-4a02-a962-c93b5f8f4ad3","Type":"ContainerStarted","Data":"5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff"} Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.589464 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerStarted","Data":"6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027"} Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.589498 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerStarted","Data":"27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794"} Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.589514 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerStarted","Data":"ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49"} Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.589529 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerStarted","Data":"d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7"} Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.589545 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerStarted","Data":"a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52"} Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.589563 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerStarted","Data":"ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93"} Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.595579 4915 generic.go:334] "Generic (PLEG): container finished" podID="b71a6eb1-12c2-4f84-875b-868c12dd17b9" 
containerID="ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5" exitCode=0 Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.595686 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" event={"ID":"b71a6eb1-12c2-4f84-875b-868c12dd17b9","Type":"ContainerDied","Data":"ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5"} Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.616503 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.629212 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.644868 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.661919 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.676085 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.687196 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.701043 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.719644 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.731970 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.764601 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.809444 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.837554 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.874904 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.913902 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:04 crc kubenswrapper[4915]: I1124 21:20:04.957475 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.002266 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:04Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.036441 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.076249 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.117744 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 
2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.157646 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.197230 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.234605 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.275789 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.324072 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.358093 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.398578 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.426511 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:05 crc kubenswrapper[4915]: E1124 21:20:05.426653 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.435644 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.475791 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.600928 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892"} Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.604182 4915 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" event={"ID":"b71a6eb1-12c2-4f84-875b-868c12dd17b9","Type":"ContainerStarted","Data":"6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc"} Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.615636 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.638269 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.657035 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.673300 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.686250 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 
2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.716014 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.760342 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.795047 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.834491 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.880667 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.918377 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:05 crc kubenswrapper[4915]: I1124 21:20:05.957703 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.000161 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:05Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.037824 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.077197 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140
c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.078424 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.078496 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.078519 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.078543 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.078570 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.078669 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:20:10.078636724 +0000 UTC m=+28.394888897 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.078676 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.078723 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.078767 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.078743 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.078876 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.078673 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" 
not registered Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.078909 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.078964 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:10.078956382 +0000 UTC m=+28.395208555 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.078981 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:10.078975242 +0000 UTC m=+28.395227415 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.078825 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.079063 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:10.079025634 +0000 UTC m=+28.395277817 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.079129 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:10.079092775 +0000 UTC m=+28.395345098 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.123497 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.161561 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.199886 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.255997 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092
272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43e
ebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.276843 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.321398 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.344484 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.349701 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 24 21:20:06 crc 
kubenswrapper[4915]: I1124 21:20:06.357930 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p
ttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.378034 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.422034 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.426368 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.426615 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.426382 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:06 crc kubenswrapper[4915]: E1124 21:20:06.427373 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.469004 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.501827 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.543393 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.577610 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.616298 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerStarted","Data":"8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2"} Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.617967 4915 generic.go:334] "Generic (PLEG): container finished" podID="b71a6eb1-12c2-4f84-875b-868c12dd17b9" 
containerID="6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc" exitCode=0 Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.617989 4915 generic.go:334] "Generic (PLEG): container finished" podID="b71a6eb1-12c2-4f84-875b-868c12dd17b9" containerID="6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f" exitCode=0 Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.618925 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" event={"ID":"b71a6eb1-12c2-4f84-875b-868c12dd17b9","Type":"ContainerDied","Data":"6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc"} Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.618951 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" event={"ID":"b71a6eb1-12c2-4f84-875b-868c12dd17b9","Type":"ContainerDied","Data":"6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f"} Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.626875 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.659027 4915 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.696273 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.737388 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.776666 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.823344 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.862411 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.902563 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.936363 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:06 crc kubenswrapper[4915]: I1124 21:20:06.979834 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:06Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.016875 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 
21:20:07.055437 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 
21:20:07.095199 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.138067 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.178942 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.224176 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.427558 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:07 crc kubenswrapper[4915]: E1124 21:20:07.427689 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.628089 4915 generic.go:334] "Generic (PLEG): container finished" podID="b71a6eb1-12c2-4f84-875b-868c12dd17b9" containerID="3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28" exitCode=0 Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.628153 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" event={"ID":"b71a6eb1-12c2-4f84-875b-868c12dd17b9","Type":"ContainerDied","Data":"3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28"} Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.656096 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.670132 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.692847 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.706450 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.716856 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.732413 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.754713 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.769290 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.782949 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.795011 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.810179 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.829012 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.846566 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.866559 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.884243 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.910568 4915 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.912475 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.912514 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.912526 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.912605 4915 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.921522 4915 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.921804 4915 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.923371 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.923469 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.923542 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:07 
crc kubenswrapper[4915]: I1124 21:20:07.923618 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.923650 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:07Z","lastTransitionTime":"2025-11-24T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:07 crc kubenswrapper[4915]: E1124 21:20:07.942844 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff54
7002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cd
fc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"nam
es\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.944529 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.949670 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.949716 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.949733 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.949751 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.949762 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:07Z","lastTransitionTime":"2025-11-24T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.966030 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: E1124 21:20:07.966310 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.971938 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.972009 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.972031 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.972059 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.972077 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:07Z","lastTransitionTime":"2025-11-24T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.985275 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: E1124 21:20:07.991742 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:07Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.996325 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.996383 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.996407 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.996435 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:07 crc kubenswrapper[4915]: I1124 21:20:07.996457 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:07Z","lastTransitionTime":"2025-11-24T21:20:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.009690 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: E1124 21:20:08.015752 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.019507 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.019564 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.019581 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.019606 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.019622 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:08Z","lastTransitionTime":"2025-11-24T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.030303 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: E1124 21:20:08.032533 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: E1124 21:20:08.032699 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.034966 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.035008 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.035020 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.035044 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.035059 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:08Z","lastTransitionTime":"2025-11-24T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.058793 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.103091 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.138392 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.138438 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.138476 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.138493 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.138506 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:08Z","lastTransitionTime":"2025-11-24T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.143332 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.179687 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.218352 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.241669 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.241729 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.241747 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.241798 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.241822 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:08Z","lastTransitionTime":"2025-11-24T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.259181 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef
6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.305729 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.343638 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.345382 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.345432 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.345444 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 
21:20:08.345466 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.345479 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:08Z","lastTransitionTime":"2025-11-24T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.382625 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.416768 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.426147 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.426284 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:08 crc kubenswrapper[4915]: E1124 21:20:08.426356 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:08 crc kubenswrapper[4915]: E1124 21:20:08.426529 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.448278 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.448322 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.448331 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.448345 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.448355 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:08Z","lastTransitionTime":"2025-11-24T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.459406 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.551489 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.551546 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.551559 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.551580 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.551594 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:08Z","lastTransitionTime":"2025-11-24T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.637679 4915 generic.go:334] "Generic (PLEG): container finished" podID="b71a6eb1-12c2-4f84-875b-868c12dd17b9" containerID="efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b" exitCode=0 Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.637768 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" event={"ID":"b71a6eb1-12c2-4f84-875b-868c12dd17b9","Type":"ContainerDied","Data":"efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b"} Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.655608 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.655692 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.655721 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.655770 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.655835 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:08Z","lastTransitionTime":"2025-11-24T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.661281 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.682321 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.700579 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.718768 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.736143 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae9
84b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.749494 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.758994 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.759039 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.759052 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.759071 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.759084 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:08Z","lastTransitionTime":"2025-11-24T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.763918 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.784284 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.814760 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.857326 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.861995 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.862033 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.862043 4915 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.862059 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.862069 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:08Z","lastTransitionTime":"2025-11-24T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.904881 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b5
4b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name
\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.941432 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.964992 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.965030 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.965041 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.965058 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.965070 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:08Z","lastTransitionTime":"2025-11-24T21:20:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:08 crc kubenswrapper[4915]: I1124 21:20:08.977271 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:08Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.015259 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.055441 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.067196 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.067254 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.067272 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.067290 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.067303 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:09Z","lastTransitionTime":"2025-11-24T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.170385 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.170446 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.170458 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.170475 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.170490 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:09Z","lastTransitionTime":"2025-11-24T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.278383 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.278466 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.278481 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.278506 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.278522 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:09Z","lastTransitionTime":"2025-11-24T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.381842 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.381901 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.381915 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.381937 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.381952 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:09Z","lastTransitionTime":"2025-11-24T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.425963 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:09 crc kubenswrapper[4915]: E1124 21:20:09.426125 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.485116 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.485192 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.485212 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.485242 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.485263 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:09Z","lastTransitionTime":"2025-11-24T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.589005 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.589067 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.589085 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.589111 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.589129 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:09Z","lastTransitionTime":"2025-11-24T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.651031 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerStarted","Data":"1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9"} Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.651901 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.652170 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.658673 4915 generic.go:334] "Generic (PLEG): container finished" podID="b71a6eb1-12c2-4f84-875b-868c12dd17b9" containerID="bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2" exitCode=0 Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.658725 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" event={"ID":"b71a6eb1-12c2-4f84-875b-868c12dd17b9","Type":"ContainerDied","Data":"bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2"} Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.692247 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.692679 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.693264 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.693300 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.693312 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.693328 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.693339 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:09Z","lastTransitionTime":"2025-11-24T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.695899 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.710335 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.723034 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.734986 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.752285 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.774181 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.794166 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca
720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.795629 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.795660 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.795669 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.795681 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.795690 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:09Z","lastTransitionTime":"2025-11-24T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.808115 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0
30ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.821532 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.834548 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.847177 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.865476 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.876761 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.892943 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.898058 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.898101 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.898112 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.898133 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.898146 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:09Z","lastTransitionTime":"2025-11-24T21:20:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.904910 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.925265 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\"
,\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.937996 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.951097 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.967935 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.980575 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:09 crc kubenswrapper[4915]: I1124 21:20:09.997311 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.001505 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.001540 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.001553 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.001571 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.001584 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:10Z","lastTransitionTime":"2025-11-24T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.011978 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.031311 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.042514 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.055709 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce
0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.104721 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.104771 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.104803 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.104821 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.104832 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:10Z","lastTransitionTime":"2025-11-24T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.106716 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\
"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.127007 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.127163 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.127219 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.127282 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.127323 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:10 crc kubenswrapper[4915]: E1124 21:20:10.127517 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:20:10 crc kubenswrapper[4915]: E1124 21:20:10.127553 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:20:10 crc 
kubenswrapper[4915]: E1124 21:20:10.127553 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:20:10 crc kubenswrapper[4915]: E1124 21:20:10.127574 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:10 crc kubenswrapper[4915]: E1124 21:20:10.127523 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:20:10 crc kubenswrapper[4915]: E1124 21:20:10.127615 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:18.127598038 +0000 UTC m=+36.443850211 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:20:10 crc kubenswrapper[4915]: E1124 21:20:10.127633 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:18.127625009 +0000 UTC m=+36.443877182 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:10 crc kubenswrapper[4915]: E1124 21:20:10.127646 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:20:18.127639669 +0000 UTC m=+36.443891842 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:20:10 crc kubenswrapper[4915]: E1124 21:20:10.127657 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:18.12765138 +0000 UTC m=+36.443903553 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:20:10 crc kubenswrapper[4915]: E1124 21:20:10.127832 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:20:10 crc kubenswrapper[4915]: E1124 21:20:10.127894 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:20:10 crc kubenswrapper[4915]: E1124 21:20:10.127923 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:10 crc kubenswrapper[4915]: E1124 21:20:10.128034 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:18.128001139 +0000 UTC m=+36.444253342 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.137637 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.179043 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.207373 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.207414 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.207425 4915 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.207440 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.207451 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:10Z","lastTransitionTime":"2025-11-24T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.213863 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757
e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.254607 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.309606 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.309851 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.309912 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.309997 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.310053 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:10Z","lastTransitionTime":"2025-11-24T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.413006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.413332 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.413452 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.413570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.413654 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:10Z","lastTransitionTime":"2025-11-24T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.426636 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:10 crc kubenswrapper[4915]: E1124 21:20:10.426870 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.426647 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:10 crc kubenswrapper[4915]: E1124 21:20:10.427202 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.516319 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.516369 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.516384 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.516403 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.516416 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:10Z","lastTransitionTime":"2025-11-24T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.618859 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.618926 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.618950 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.618977 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.619000 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:10Z","lastTransitionTime":"2025-11-24T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.665069 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" event={"ID":"b71a6eb1-12c2-4f84-875b-868c12dd17b9","Type":"ContainerStarted","Data":"17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533"} Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.665174 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.683871 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85
a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.699574 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.712881 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluste
r-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.721590 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.721750 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 
21:20:10.721866 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.721961 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.722064 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:10Z","lastTransitionTime":"2025-11-24T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.743344 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\
\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.758506 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.772558 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.787340 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\"
,\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.802380 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.816180 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.824901 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.825233 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.825305 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.825373 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.825475 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:10Z","lastTransitionTime":"2025-11-24T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.833752 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.848869 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.864154 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:2
0:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.878820 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.898488 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.910937 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:10Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.928892 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.928958 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.928976 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.929002 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:10 crc kubenswrapper[4915]: I1124 21:20:10.929019 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:10Z","lastTransitionTime":"2025-11-24T21:20:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.031417 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.031461 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.031470 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.031489 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.031501 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:11Z","lastTransitionTime":"2025-11-24T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.135982 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.136042 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.136072 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.136096 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.136110 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:11Z","lastTransitionTime":"2025-11-24T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.239348 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.239393 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.239406 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.239425 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.239436 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:11Z","lastTransitionTime":"2025-11-24T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.342467 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.342515 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.342527 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.342546 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.342558 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:11Z","lastTransitionTime":"2025-11-24T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.425986 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:11 crc kubenswrapper[4915]: E1124 21:20:11.426135 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.444343 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.444372 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.444383 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.444394 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.444403 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:11Z","lastTransitionTime":"2025-11-24T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.547583 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.547634 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.547647 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.547667 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.547697 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:11Z","lastTransitionTime":"2025-11-24T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.652486 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.653103 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.653224 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.653255 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.653286 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:11Z","lastTransitionTime":"2025-11-24T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.668888 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.756201 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.756239 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.756250 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.756268 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.756280 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:11Z","lastTransitionTime":"2025-11-24T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.858828 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.858869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.858881 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.858897 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.858909 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:11Z","lastTransitionTime":"2025-11-24T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.961345 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.961387 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.961396 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.961412 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:11 crc kubenswrapper[4915]: I1124 21:20:11.961422 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:11Z","lastTransitionTime":"2025-11-24T21:20:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.063907 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.063939 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.063949 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.063966 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.063976 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:12Z","lastTransitionTime":"2025-11-24T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.167179 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.167257 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.167277 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.167303 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.167323 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:12Z","lastTransitionTime":"2025-11-24T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.271805 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.271864 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.271876 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.271900 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.271915 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:12Z","lastTransitionTime":"2025-11-24T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.375329 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.375391 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.375404 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.375422 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.375440 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:12Z","lastTransitionTime":"2025-11-24T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.426103 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.426212 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:12 crc kubenswrapper[4915]: E1124 21:20:12.426329 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:12 crc kubenswrapper[4915]: E1124 21:20:12.426437 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.444089 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea
83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.460505 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.475287 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.477991 4915 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.478041 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.478066 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.478099 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.478125 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:12Z","lastTransitionTime":"2025-11-24T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.505865 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.518741 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.530637 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.551609 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.572900 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.579961 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.580011 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.580024 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.580043 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.580055 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:12Z","lastTransitionTime":"2025-11-24T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.596303 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be
30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:
45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.611555 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.633065 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\"
,\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.648627 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.668619 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.679260 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/0.log" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.684259 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.684315 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.684329 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.684353 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.684367 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:12Z","lastTransitionTime":"2025-11-24T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.687281 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f235785-6b02-4304-99b8-3b216c369d45" containerID="1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9" exitCode=1 Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.687338 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerDied","Data":"1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9"} Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.688145 4915 scope.go:117] "RemoveContainer" containerID="1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.689956 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1
7e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead20
3bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts
-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.706317 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.722366 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.733550 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.755244 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:12Z\\\",\\\"message\\\":\\\"ping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989925 6173 reflector.go:311] Stopping 
reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989970 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989058 6173 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.990002 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:20:11.990704 6173 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:20:11.991397 6173 factory.go:656] Stopping watch factory\\\\nI1124 21:20:11.991479 6173 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:20:11.992223 6173 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:20:11.990722 6173 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:20:11.992973 6173 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.768282 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c198
42d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.778851 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.787698 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.787732 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.787740 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.787755 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.787765 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:12Z","lastTransitionTime":"2025-11-24T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.789609 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef
6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.813904 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.824292 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.836692 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.847270 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.859265 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c829
8d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.875055 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.887737 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.890159 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.890185 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.890197 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.890212 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.890222 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:12Z","lastTransitionTime":"2025-11-24T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.903038 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.915086 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:12Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.992851 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.992912 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.992932 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.992957 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:12 crc kubenswrapper[4915]: I1124 21:20:12.992975 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:12Z","lastTransitionTime":"2025-11-24T21:20:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.095544 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.095592 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.095608 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.095628 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.095642 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:13Z","lastTransitionTime":"2025-11-24T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.198833 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.198864 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.198873 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.198886 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.198897 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:13Z","lastTransitionTime":"2025-11-24T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.303314 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.303345 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.303356 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.303390 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.303402 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:13Z","lastTransitionTime":"2025-11-24T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.406154 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.406227 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.406248 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.406277 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.406301 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:13Z","lastTransitionTime":"2025-11-24T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.426499 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:13 crc kubenswrapper[4915]: E1124 21:20:13.426651 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.508547 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.508588 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.508595 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.508609 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.508619 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:13Z","lastTransitionTime":"2025-11-24T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.611603 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.611678 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.611696 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.611719 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.611733 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:13Z","lastTransitionTime":"2025-11-24T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.697172 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/1.log" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.699324 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/0.log" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.704275 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f235785-6b02-4304-99b8-3b216c369d45" containerID="82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d" exitCode=1 Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.704334 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerDied","Data":"82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d"} Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.704438 4915 scope.go:117] "RemoveContainer" containerID="1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.706090 4915 scope.go:117] "RemoveContainer" containerID="82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d" Nov 24 21:20:13 crc kubenswrapper[4915]: E1124 21:20:13.706407 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.716146 4915 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.716180 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.716189 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.716205 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.716217 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:13Z","lastTransitionTime":"2025-11-24T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.721896 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.738969 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a793
79b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.760501 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.773936 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.789075 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.804142 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.818759 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.818810 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.818820 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.818837 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.818846 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:13Z","lastTransitionTime":"2025-11-24T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.825747 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.839910 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.853471 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.869187 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.882984 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.896676 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.908610 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.921755 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.921805 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.921816 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.921832 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.921842 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:13Z","lastTransitionTime":"2025-11-24T21:20:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.925970 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:12Z\\\",\\\"message\\\":\\\"ping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989925 6173 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989970 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989058 6173 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.990002 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:20:11.990704 6173 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:20:11.991397 6173 factory.go:656] Stopping watch factory\\\\nI1124 21:20:11.991479 6173 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:20:11.992223 6173 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:20:11.990722 6173 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:20:11.992973 6173 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:13Z\\\",\\\"message\\\":\\\"56] Processing sync for service openshift-console/downloads for network=default\\\\nI1124 21:20:13.519616 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-b8kq8\\\\nI1124 21:20:13.519493 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF1124 21:20:13.519625 6341 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:13.519633 6341 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-b8kq8 in node crc\\\\nI1124 21:20:13.519637 6341 ovn.go:134] Ensuring zone local for Pod opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"m
ountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:13 crc kubenswrapper[4915]: I1124 21:20:13.938315 4915 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.024607 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.024651 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 
21:20:14.024665 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.024681 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.024693 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:14Z","lastTransitionTime":"2025-11-24T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.128395 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.128439 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.128450 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.128466 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.128478 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:14Z","lastTransitionTime":"2025-11-24T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.231457 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.231506 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.231518 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.231539 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.231550 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:14Z","lastTransitionTime":"2025-11-24T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.335580 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.335641 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.335659 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.335682 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.335699 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:14Z","lastTransitionTime":"2025-11-24T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.426201 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:14 crc kubenswrapper[4915]: E1124 21:20:14.426377 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.427132 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:14 crc kubenswrapper[4915]: E1124 21:20:14.427328 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.437657 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.437699 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.437710 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.437725 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.437738 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:14Z","lastTransitionTime":"2025-11-24T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.541506 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.541562 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.541579 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.541602 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.541621 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:14Z","lastTransitionTime":"2025-11-24T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.645022 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.645059 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.645068 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.645083 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.645157 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:14Z","lastTransitionTime":"2025-11-24T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.709520 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/1.log" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.748390 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.748434 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.748442 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.748457 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.748466 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:14Z","lastTransitionTime":"2025-11-24T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.851328 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.851378 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.851389 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.851414 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.851429 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:14Z","lastTransitionTime":"2025-11-24T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.918091 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr"] Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.918581 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.921436 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.924266 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.950443 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\"
:[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.955570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.955862 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.955950 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.956063 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.956161 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:14Z","lastTransitionTime":"2025-11-24T21:20:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.967844 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.988841 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:12Z\\\",\\\"message\\\":\\\"ping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989925 6173 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989970 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989058 6173 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.990002 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:20:11.990704 6173 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:20:11.991397 6173 factory.go:656] Stopping watch factory\\\\nI1124 21:20:11.991479 6173 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:20:11.992223 6173 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:20:11.990722 6173 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:20:11.992973 6173 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:13Z\\\",\\\"message\\\":\\\"56] Processing sync for service openshift-console/downloads for network=default\\\\nI1124 21:20:13.519616 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-b8kq8\\\\nI1124 21:20:13.519493 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF1124 21:20:13.519625 6341 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:13.519633 6341 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-b8kq8 in node crc\\\\nI1124 21:20:13.519637 6341 ovn.go:134] Ensuring zone local for Pod opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"m
ountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:14Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.989391 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42f3e92c-d4a6-421c-b970-3d6f6baf0ee9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lpnnr\" (UID: \"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.989429 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvs4t\" (UniqueName: \"kubernetes.io/projected/42f3e92c-d4a6-421c-b970-3d6f6baf0ee9-kube-api-access-cvs4t\") pod \"ovnkube-control-plane-749d76644c-lpnnr\" (UID: \"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.989446 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42f3e92c-d4a6-421c-b970-3d6f6baf0ee9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lpnnr\" (UID: \"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" Nov 24 21:20:14 crc kubenswrapper[4915]: I1124 21:20:14.989462 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42f3e92c-d4a6-421c-b970-3d6f6baf0ee9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lpnnr\" (UID: \"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.002879 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c198
42d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.016722 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.038267 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.056997 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.058209 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.058241 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.058250 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 
21:20:15.058264 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.058273 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:15Z","lastTransitionTime":"2025-11-24T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.068336 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.076648 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.085116 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.089988 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42f3e92c-d4a6-421c-b970-3d6f6baf0ee9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lpnnr\" (UID: \"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.090019 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/42f3e92c-d4a6-421c-b970-3d6f6baf0ee9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lpnnr\" (UID: \"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.090058 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42f3e92c-d4a6-421c-b970-3d6f6baf0ee9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lpnnr\" (UID: \"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.090091 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvs4t\" (UniqueName: \"kubernetes.io/projected/42f3e92c-d4a6-421c-b970-3d6f6baf0ee9-kube-api-access-cvs4t\") pod \"ovnkube-control-plane-749d76644c-lpnnr\" (UID: \"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.090582 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/42f3e92c-d4a6-421c-b970-3d6f6baf0ee9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lpnnr\" (UID: \"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.091054 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/42f3e92c-d4a6-421c-b970-3d6f6baf0ee9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lpnnr\" (UID: \"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.095647 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/42f3e92c-d4a6-421c-b970-3d6f6baf0ee9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lpnnr\" (UID: \"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.101354 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-k
ube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.109110 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvs4t\" (UniqueName: \"kubernetes.io/projected/42f3e92c-d4a6-421c-b970-3d6f6baf0ee9-kube-api-access-cvs4t\") pod \"ovnkube-control-plane-749d76644c-lpnnr\" (UID: \"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.112170 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.122628 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.134697 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.143216 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.157256 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21
:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.161183 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.161226 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.161237 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.161253 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.161264 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:15Z","lastTransitionTime":"2025-11-24T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.244262 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" Nov 24 21:20:15 crc kubenswrapper[4915]: W1124 21:20:15.258712 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42f3e92c_d4a6_421c_b970_3d6f6baf0ee9.slice/crio-8b035c8bee19e04f0b462c11a09aa6b0ed60eaeaef149138736d895c31d93732 WatchSource:0}: Error finding container 8b035c8bee19e04f0b462c11a09aa6b0ed60eaeaef149138736d895c31d93732: Status 404 returned error can't find the container with id 8b035c8bee19e04f0b462c11a09aa6b0ed60eaeaef149138736d895c31d93732 Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.262893 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.262928 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.262939 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.262955 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.262965 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:15Z","lastTransitionTime":"2025-11-24T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.365579 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.365648 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.365661 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.365683 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.365700 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:15Z","lastTransitionTime":"2025-11-24T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.426563 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:15 crc kubenswrapper[4915]: E1124 21:20:15.426702 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.469949 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.470015 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.470028 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.470053 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.470070 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:15Z","lastTransitionTime":"2025-11-24T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.573941 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.573995 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.574007 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.574030 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.574047 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:15Z","lastTransitionTime":"2025-11-24T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.676954 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.676994 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.677004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.677022 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.677033 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:15Z","lastTransitionTime":"2025-11-24T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.722035 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" event={"ID":"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9","Type":"ContainerStarted","Data":"1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679"} Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.722114 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" event={"ID":"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9","Type":"ContainerStarted","Data":"27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf"} Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.722132 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" event={"ID":"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9","Type":"ContainerStarted","Data":"8b035c8bee19e04f0b462c11a09aa6b0ed60eaeaef149138736d895c31d93732"} Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.743577 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:12Z\\\",\\\"message\\\":\\\"ping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989925 6173 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989970 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989058 6173 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.990002 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:20:11.990704 6173 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:20:11.991397 6173 factory.go:656] Stopping watch factory\\\\nI1124 21:20:11.991479 6173 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:20:11.992223 6173 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:20:11.990722 6173 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:20:11.992973 6173 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:13Z\\\",\\\"message\\\":\\\"56] Processing sync for service openshift-console/downloads for network=default\\\\nI1124 21:20:13.519616 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-b8kq8\\\\nI1124 21:20:13.519493 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF1124 21:20:13.519625 6341 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:13.519633 6341 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-b8kq8 in node crc\\\\nI1124 21:20:13.519637 6341 ovn.go:134] Ensuring zone local for Pod opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"m
ountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.756462 4915 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.771600 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.779756 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.779825 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.779839 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.779855 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.779868 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:15Z","lastTransitionTime":"2025-11-24T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.787891 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.806372 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.823308 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.834603 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.845411 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.862946 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.881814 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.881855 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.881869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.881885 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.881897 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:15Z","lastTransitionTime":"2025-11-24T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.891328 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be
30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:
45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.909173 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762
fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.922675 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.940222 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.954652 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.969140 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.982342 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:15Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.984614 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.984651 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.984665 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.984684 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:15 crc kubenswrapper[4915]: I1124 21:20:15.984696 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:15Z","lastTransitionTime":"2025-11-24T21:20:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.087626 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.087686 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.087702 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.087726 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.087744 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:16Z","lastTransitionTime":"2025-11-24T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.191378 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.191435 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.191446 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.191468 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.191480 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:16Z","lastTransitionTime":"2025-11-24T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.294079 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.294137 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.294148 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.294172 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.294186 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:16Z","lastTransitionTime":"2025-11-24T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.396963 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.397002 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.397011 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.397026 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.397036 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:16Z","lastTransitionTime":"2025-11-24T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.425912 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.425912 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:16 crc kubenswrapper[4915]: E1124 21:20:16.426159 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:16 crc kubenswrapper[4915]: E1124 21:20:16.426288 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.449864 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hkc4w"] Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.450347 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:16 crc kubenswrapper[4915]: E1124 21:20:16.450412 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.471544 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"
os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.487663 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.500652 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.500708 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.500722 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.500755 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.500795 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:16Z","lastTransitionTime":"2025-11-24T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.502828 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.506232 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs\") pod \"network-metrics-daemon-hkc4w\" (UID: \"a785aaf6-e561-47e9-a3ff-69e6930c5941\") " pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.506286 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlb48\" (UniqueName: 
\"kubernetes.io/projected/a785aaf6-e561-47e9-a3ff-69e6930c5941-kube-api-access-wlb48\") pod \"network-metrics-daemon-hkc4w\" (UID: \"a785aaf6-e561-47e9-a3ff-69e6930c5941\") " pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.524326 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:12Z\\\",\\\"message\\\":\\\"ping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989925 6173 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989970 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989058 6173 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.990002 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:20:11.990704 6173 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:20:11.991397 6173 factory.go:656] Stopping watch factory\\\\nI1124 21:20:11.991479 6173 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:20:11.992223 6173 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:20:11.990722 6173 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:20:11.992973 6173 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:13Z\\\",\\\"message\\\":\\\"56] Processing sync for service openshift-console/downloads for network=default\\\\nI1124 21:20:13.519616 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-b8kq8\\\\nI1124 21:20:13.519493 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF1124 21:20:13.519625 6341 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network 
controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:13.519633 6341 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-b8kq8 in node crc\\\\nI1124 21:20:13.519637 6341 ovn.go:134] Ensuring zone local for Pod opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"m
ountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.535227 4915 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.546309 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc 
kubenswrapper[4915]: I1124 21:20:16.560015 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.580897 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.597633 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.603045 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.603094 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.603107 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 
21:20:16.603124 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.603135 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:16Z","lastTransitionTime":"2025-11-24T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.607465 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlb48\" (UniqueName: \"kubernetes.io/projected/a785aaf6-e561-47e9-a3ff-69e6930c5941-kube-api-access-wlb48\") pod \"network-metrics-daemon-hkc4w\" (UID: \"a785aaf6-e561-47e9-a3ff-69e6930c5941\") " pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.607551 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs\") pod \"network-metrics-daemon-hkc4w\" (UID: \"a785aaf6-e561-47e9-a3ff-69e6930c5941\") " pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:16 crc kubenswrapper[4915]: E1124 21:20:16.607712 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:20:16 crc kubenswrapper[4915]: E1124 21:20:16.607865 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs podName:a785aaf6-e561-47e9-a3ff-69e6930c5941 nodeName:}" failed. 
No retries permitted until 2025-11-24 21:20:17.10783037 +0000 UTC m=+35.424082583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs") pod "network-metrics-daemon-hkc4w" (UID: "a785aaf6-e561-47e9-a3ff-69e6930c5941") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.611066 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.624952 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.634837 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.638058 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlb48\" (UniqueName: \"kubernetes.io/projected/a785aaf6-e561-47e9-a3ff-69e6930c5941-kube-api-access-wlb48\") pod \"network-metrics-daemon-hkc4w\" (UID: \"a785aaf6-e561-47e9-a3ff-69e6930c5941\") " pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.647881 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\"
,\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.660957 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.672065 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.684309 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.698279 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:16Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.705765 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.705899 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.705967 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.706033 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.706102 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:16Z","lastTransitionTime":"2025-11-24T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.809638 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.809682 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.809692 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.809711 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.809723 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:16Z","lastTransitionTime":"2025-11-24T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.912422 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.912863 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.913016 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.913185 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:16 crc kubenswrapper[4915]: I1124 21:20:16.913326 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:16Z","lastTransitionTime":"2025-11-24T21:20:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.016274 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.016367 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.016386 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.016408 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.016424 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:17Z","lastTransitionTime":"2025-11-24T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.112371 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs\") pod \"network-metrics-daemon-hkc4w\" (UID: \"a785aaf6-e561-47e9-a3ff-69e6930c5941\") " pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:17 crc kubenswrapper[4915]: E1124 21:20:17.112580 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:20:17 crc kubenswrapper[4915]: E1124 21:20:17.112679 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs podName:a785aaf6-e561-47e9-a3ff-69e6930c5941 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:18.112655169 +0000 UTC m=+36.428907352 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs") pod "network-metrics-daemon-hkc4w" (UID: "a785aaf6-e561-47e9-a3ff-69e6930c5941") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.119121 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.119180 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.119206 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.119238 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.119260 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:17Z","lastTransitionTime":"2025-11-24T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.222924 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.222973 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.222986 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.223004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.223016 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:17Z","lastTransitionTime":"2025-11-24T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.325500 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.325542 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.325731 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.325746 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.325757 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:17Z","lastTransitionTime":"2025-11-24T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.425985 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:17 crc kubenswrapper[4915]: E1124 21:20:17.426109 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.428073 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.428098 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.428106 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.428119 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.428129 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:17Z","lastTransitionTime":"2025-11-24T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.530688 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.530764 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.530819 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.530848 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.530871 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:17Z","lastTransitionTime":"2025-11-24T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.633689 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.633726 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.633736 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.633751 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.633760 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:17Z","lastTransitionTime":"2025-11-24T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.735464 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.735509 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.735519 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.735534 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.735545 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:17Z","lastTransitionTime":"2025-11-24T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.838731 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.838909 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.838922 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.838941 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.838953 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:17Z","lastTransitionTime":"2025-11-24T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.942474 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.942524 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.942543 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.942568 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:17 crc kubenswrapper[4915]: I1124 21:20:17.942586 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:17Z","lastTransitionTime":"2025-11-24T21:20:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.045269 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.045568 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.045679 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.045747 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.045834 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:18Z","lastTransitionTime":"2025-11-24T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.123507 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs\") pod \"network-metrics-daemon-hkc4w\" (UID: \"a785aaf6-e561-47e9-a3ff-69e6930c5941\") " pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.123767 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.123922 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs podName:a785aaf6-e561-47e9-a3ff-69e6930c5941 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:20.123888076 +0000 UTC m=+38.440140279 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs") pod "network-metrics-daemon-hkc4w" (UID: "a785aaf6-e561-47e9-a3ff-69e6930c5941") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.148547 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.148577 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.148585 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.148597 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.148606 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:18Z","lastTransitionTime":"2025-11-24T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.224518 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.224629 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.224672 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.224707 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.224755 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.224916 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.224945 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.224960 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.225013 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:34.224995745 +0000 UTC m=+52.541247918 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.225096 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:20:34.225071057 +0000 UTC m=+52.541323230 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.225157 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.225166 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.225247 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 
nodeName:}" failed. No retries permitted until 2025-11-24 21:20:34.225225061 +0000 UTC m=+52.541477234 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.225338 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:34.225326553 +0000 UTC m=+52.541578726 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.225454 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.225522 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.225577 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.225660 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:34.225651202 +0000 UTC m=+52.541903375 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.250396 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.250435 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.250443 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.250460 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.250469 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:18Z","lastTransitionTime":"2025-11-24T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.352621 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.352676 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.352693 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.352714 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.352732 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:18Z","lastTransitionTime":"2025-11-24T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.362454 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.362498 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.362509 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.362524 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.362536 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:18Z","lastTransitionTime":"2025-11-24T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.378405 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.383730 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.383798 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.383813 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.383832 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.383843 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:18Z","lastTransitionTime":"2025-11-24T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.402929 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.407264 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.407333 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.407350 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.407372 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.407388 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:18Z","lastTransitionTime":"2025-11-24T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.426363 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.426387 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.426531 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.426499 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.427381 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.426718 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.426717 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.436178 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.436235 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.436252 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.436275 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.436292 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:18Z","lastTransitionTime":"2025-11-24T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.455527 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.461765 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.461837 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.461849 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.461869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.461882 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:18Z","lastTransitionTime":"2025-11-24T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.484320 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:18Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:18 crc kubenswrapper[4915]: E1124 21:20:18.484625 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.486729 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.486760 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.486768 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.486796 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.486805 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:18Z","lastTransitionTime":"2025-11-24T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.589452 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.589487 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.589498 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.589511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.589520 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:18Z","lastTransitionTime":"2025-11-24T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.692047 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.692113 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.692136 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.692167 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.692191 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:18Z","lastTransitionTime":"2025-11-24T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.794621 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.794695 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.794720 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.794749 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.794815 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:18Z","lastTransitionTime":"2025-11-24T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.897864 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.897918 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.897935 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.897959 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:18 crc kubenswrapper[4915]: I1124 21:20:18.897977 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:18Z","lastTransitionTime":"2025-11-24T21:20:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.003190 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.003253 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.003269 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.003296 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.003315 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:19Z","lastTransitionTime":"2025-11-24T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.111022 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.111087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.111104 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.111130 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.111148 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:19Z","lastTransitionTime":"2025-11-24T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.214154 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.214214 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.214232 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.214256 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.214273 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:19Z","lastTransitionTime":"2025-11-24T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.317139 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.317198 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.317216 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.317241 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.317258 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:19Z","lastTransitionTime":"2025-11-24T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.420533 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.420595 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.420617 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.420646 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.420670 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:19Z","lastTransitionTime":"2025-11-24T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.426149 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:19 crc kubenswrapper[4915]: E1124 21:20:19.426310 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.523511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.523576 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.523594 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.523622 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.523645 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:19Z","lastTransitionTime":"2025-11-24T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.627128 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.627208 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.627232 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.627277 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.627301 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:19Z","lastTransitionTime":"2025-11-24T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.729594 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.729997 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.730017 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.730033 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.730044 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:19Z","lastTransitionTime":"2025-11-24T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.833066 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.833106 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.833118 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.833133 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.833143 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:19Z","lastTransitionTime":"2025-11-24T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.935395 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.935461 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.935477 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.935500 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:19 crc kubenswrapper[4915]: I1124 21:20:19.935516 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:19Z","lastTransitionTime":"2025-11-24T21:20:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.038492 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.038556 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.038570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.038587 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.038599 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:20Z","lastTransitionTime":"2025-11-24T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.141457 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.141525 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.141543 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.141578 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.141612 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:20Z","lastTransitionTime":"2025-11-24T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.146488 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs\") pod \"network-metrics-daemon-hkc4w\" (UID: \"a785aaf6-e561-47e9-a3ff-69e6930c5941\") " pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:20 crc kubenswrapper[4915]: E1124 21:20:20.147178 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:20:20 crc kubenswrapper[4915]: E1124 21:20:20.147249 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs podName:a785aaf6-e561-47e9-a3ff-69e6930c5941 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:24.147232363 +0000 UTC m=+42.463484556 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs") pod "network-metrics-daemon-hkc4w" (UID: "a785aaf6-e561-47e9-a3ff-69e6930c5941") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.245544 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.245598 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.245615 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.245638 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.245654 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:20Z","lastTransitionTime":"2025-11-24T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.349658 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.349713 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.349723 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.349749 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.349764 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:20Z","lastTransitionTime":"2025-11-24T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.426436 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:20 crc kubenswrapper[4915]: E1124 21:20:20.426904 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.427090 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.427092 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:20 crc kubenswrapper[4915]: E1124 21:20:20.427381 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:20 crc kubenswrapper[4915]: E1124 21:20:20.427479 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.453109 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.453170 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.453189 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.453217 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.453237 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:20Z","lastTransitionTime":"2025-11-24T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.556563 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.556617 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.556636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.556659 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.556677 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:20Z","lastTransitionTime":"2025-11-24T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.659692 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.659748 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.659764 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.659847 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.659887 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:20Z","lastTransitionTime":"2025-11-24T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.761592 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.761636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.761645 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.761667 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.761679 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:20Z","lastTransitionTime":"2025-11-24T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.864743 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.865099 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.865201 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.865289 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.865380 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:20Z","lastTransitionTime":"2025-11-24T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.968733 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.969034 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.969136 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.969243 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:20 crc kubenswrapper[4915]: I1124 21:20:20.969327 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:20Z","lastTransitionTime":"2025-11-24T21:20:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.072634 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.072714 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.072738 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.072768 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.072843 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:21Z","lastTransitionTime":"2025-11-24T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.176006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.176374 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.176521 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.176662 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.176833 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:21Z","lastTransitionTime":"2025-11-24T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.280755 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.280850 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.280869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.280900 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.280920 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:21Z","lastTransitionTime":"2025-11-24T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.384759 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.384876 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.384906 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.384934 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.384952 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:21Z","lastTransitionTime":"2025-11-24T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.426583 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:21 crc kubenswrapper[4915]: E1124 21:20:21.426770 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.488018 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.488129 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.488157 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.488187 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.488212 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:21Z","lastTransitionTime":"2025-11-24T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.591048 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.591144 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.591162 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.591187 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.591203 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:21Z","lastTransitionTime":"2025-11-24T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.694620 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.694686 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.694709 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.694738 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.694758 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:21Z","lastTransitionTime":"2025-11-24T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.797438 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.797887 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.798166 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.798396 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.798615 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:21Z","lastTransitionTime":"2025-11-24T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.902029 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.902057 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.902067 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.902080 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:21 crc kubenswrapper[4915]: I1124 21:20:21.902089 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:21Z","lastTransitionTime":"2025-11-24T21:20:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.005467 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.005535 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.005552 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.005577 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.005594 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:22Z","lastTransitionTime":"2025-11-24T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.108742 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.108859 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.108885 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.108918 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.108943 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:22Z","lastTransitionTime":"2025-11-24T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.211425 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.211504 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.211528 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.211556 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.211578 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:22Z","lastTransitionTime":"2025-11-24T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.314763 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.314853 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.314871 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.314895 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.314915 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:22Z","lastTransitionTime":"2025-11-24T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.417356 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.417386 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.417396 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.417408 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.417417 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:22Z","lastTransitionTime":"2025-11-24T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.426099 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.426132 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.426185 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:22 crc kubenswrapper[4915]: E1124 21:20:22.427190 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:22 crc kubenswrapper[4915]: E1124 21:20:22.427373 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:22 crc kubenswrapper[4915]: E1124 21:20:22.427525 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.446639 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metri
cs-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.467956 4915 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b
56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.488098 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.504751 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.519903 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.519942 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.519953 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.519966 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.519975 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:22Z","lastTransitionTime":"2025-11-24T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.525612 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.540093 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.558254 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.572572 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc 
kubenswrapper[4915]: I1124 21:20:22.590882 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.607662 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.622194 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.622241 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:22 crc kubenswrapper[4915]: 
I1124 21:20:22.622257 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.622282 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.622301 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:22Z","lastTransitionTime":"2025-11-24T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.631532 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c93fae1a68d186c3aef1f328516c9bbae933e51f5d9f2f2e7a48a6de64c77f9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:12Z\\\",\\\"message\\\":\\\"ping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989925 6173 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989970 6173 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.989058 6173 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1124 21:20:11.990002 6173 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:20:11.990704 6173 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:20:11.991397 6173 factory.go:656] Stopping watch factory\\\\nI1124 21:20:11.991479 6173 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:20:11.992223 6173 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:20:11.990722 6173 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:20:11.992973 6173 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:13Z\\\",\\\"message\\\":\\\"56] Processing sync for service openshift-console/downloads for network=default\\\\nI1124 21:20:13.519616 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-b8kq8\\\\nI1124 21:20:13.519493 6341 obj_retry.go:365] Adding new object: 
*v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF1124 21:20:13.519625 6341 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:13.519633 6341 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-b8kq8 in node crc\\\\nI1124 21:20:13.519637 6341 ovn.go:134] Ensuring zone local for Pod 
opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab24
41b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.648037 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.659716 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.672572 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.687754 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.721069 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.729079 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.729186 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.729218 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.729262 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.729351 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:22Z","lastTransitionTime":"2025-11-24T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.741239 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:22Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.832253 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.832360 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.832385 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.832416 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.832439 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:22Z","lastTransitionTime":"2025-11-24T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.934733 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.934765 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.934805 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.934825 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:22 crc kubenswrapper[4915]: I1124 21:20:22.934833 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:22Z","lastTransitionTime":"2025-11-24T21:20:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.037642 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.037694 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.037709 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.037730 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.037746 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:23Z","lastTransitionTime":"2025-11-24T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.141377 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.141422 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.141534 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.141556 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.141567 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:23Z","lastTransitionTime":"2025-11-24T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.244346 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.244424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.244442 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.244464 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.244509 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:23Z","lastTransitionTime":"2025-11-24T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.346723 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.346803 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.346815 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.346880 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.346892 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:23Z","lastTransitionTime":"2025-11-24T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.472252 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:23 crc kubenswrapper[4915]: E1124 21:20:23.472469 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.475253 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.475321 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.475343 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.475372 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.475394 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:23Z","lastTransitionTime":"2025-11-24T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.578152 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.578214 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.578230 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.578254 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.578273 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:23Z","lastTransitionTime":"2025-11-24T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.680914 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.680971 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.680986 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.681007 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.681022 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:23Z","lastTransitionTime":"2025-11-24T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.783771 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.784154 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.784192 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.784220 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.784241 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:23Z","lastTransitionTime":"2025-11-24T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.888100 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.888170 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.888190 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.888215 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.888231 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:23Z","lastTransitionTime":"2025-11-24T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.990828 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.990862 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.990873 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.990888 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:23 crc kubenswrapper[4915]: I1124 21:20:23.990899 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:23Z","lastTransitionTime":"2025-11-24T21:20:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.094136 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.094184 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.094196 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.094212 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.094227 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:24Z","lastTransitionTime":"2025-11-24T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.181406 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs\") pod \"network-metrics-daemon-hkc4w\" (UID: \"a785aaf6-e561-47e9-a3ff-69e6930c5941\") " pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:24 crc kubenswrapper[4915]: E1124 21:20:24.181577 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:20:24 crc kubenswrapper[4915]: E1124 21:20:24.182007 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs podName:a785aaf6-e561-47e9-a3ff-69e6930c5941 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:32.181987246 +0000 UTC m=+50.498239419 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs") pod "network-metrics-daemon-hkc4w" (UID: "a785aaf6-e561-47e9-a3ff-69e6930c5941") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.196815 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.196857 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.196870 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.196889 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.196904 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:24Z","lastTransitionTime":"2025-11-24T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.299201 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.299453 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.299670 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.299863 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.300018 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:24Z","lastTransitionTime":"2025-11-24T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.402984 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.403035 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.403048 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.403068 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.403091 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:24Z","lastTransitionTime":"2025-11-24T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.427076 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:24 crc kubenswrapper[4915]: E1124 21:20:24.427372 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.428010 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:24 crc kubenswrapper[4915]: E1124 21:20:24.428079 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.428230 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:24 crc kubenswrapper[4915]: E1124 21:20:24.428319 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.506727 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.507094 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.507146 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.507186 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.507209 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:24Z","lastTransitionTime":"2025-11-24T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.610439 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.611085 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.611131 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.611151 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.611163 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:24Z","lastTransitionTime":"2025-11-24T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.714721 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.715958 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.716145 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.716283 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.716435 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:24Z","lastTransitionTime":"2025-11-24T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.819232 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.819571 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.819753 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.819951 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.820131 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:24Z","lastTransitionTime":"2025-11-24T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.922368 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.922439 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.922452 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.922468 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:24 crc kubenswrapper[4915]: I1124 21:20:24.922479 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:24Z","lastTransitionTime":"2025-11-24T21:20:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.025095 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.025549 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.025687 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.025869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.025991 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:25Z","lastTransitionTime":"2025-11-24T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.129451 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.129719 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.129996 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.130104 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.130193 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:25Z","lastTransitionTime":"2025-11-24T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.233514 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.233565 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.233582 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.233606 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.233624 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:25Z","lastTransitionTime":"2025-11-24T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.337079 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.337118 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.337128 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.337142 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.337151 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:25Z","lastTransitionTime":"2025-11-24T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.426566 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:25 crc kubenswrapper[4915]: E1124 21:20:25.426716 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.427445 4915 scope.go:117] "RemoveContainer" containerID="82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.439134 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.439224 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.439627 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.439717 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.440045 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:25Z","lastTransitionTime":"2025-11-24T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.451221 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.469433 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.485924 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.504712 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.519090 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.538513 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24
T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.541913 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.541949 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.541959 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.541976 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.541988 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:25Z","lastTransitionTime":"2025-11-24T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.557811 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.575821 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.593581 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:13Z\\\",\\\"message\\\":\\\"56] Processing sync for service openshift-console/downloads for network=default\\\\nI1124 21:20:13.519616 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-b8kq8\\\\nI1124 21:20:13.519493 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF1124 
21:20:13.519625 6341 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:13.519633 6341 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-b8kq8 in node crc\\\\nI1124 21:20:13.519637 6341 ovn.go:134] Ensuring zone local for Pod opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8
ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.605208 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c198
42d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.617744 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc 
kubenswrapper[4915]: I1124 21:20:25.634697 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.646474 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.646496 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.646503 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.646516 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.646524 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:25Z","lastTransitionTime":"2025-11-24T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.648216 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef
6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.673963 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.687840 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.698240 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.707416 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.754895 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.754977 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.754990 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.755015 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.755030 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:25Z","lastTransitionTime":"2025-11-24T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.759744 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/1.log" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.761929 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerStarted","Data":"13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4"} Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.762038 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.781055 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be
178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.797894 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766
952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.822544 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\"
,\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.840055 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.854377 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.857653 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.857847 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.857909 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.857968 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.858042 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:25Z","lastTransitionTime":"2025-11-24T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.871049 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z 
is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.893499 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:13Z\\\",\\\"message\\\":\\\"56] Processing sync for service openshift-console/downloads for network=default\\\\nI1124 21:20:13.519616 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-b8kq8\\\\nI1124 21:20:13.519493 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF1124 
21:20:13.519625 6341 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:13.519633 6341 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-b8kq8 in node crc\\\\nI1124 21:20:13.519637 6341 ovn.go:134] Ensuring zone local for Pod 
opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.903526 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c198
42d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.912186 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc 
kubenswrapper[4915]: I1124 21:20:25.925860 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.937475 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.952251 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.960592 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.960635 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.960647 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.960668 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.960682 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:25Z","lastTransitionTime":"2025-11-24T21:20:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.964890 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.974322 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.984104 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:25 crc kubenswrapper[4915]: I1124 21:20:25.998611 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:25Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.020603 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.063433 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.063465 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.063472 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.063486 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.063494 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:26Z","lastTransitionTime":"2025-11-24T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.166086 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.166358 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.166447 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.166607 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.166799 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:26Z","lastTransitionTime":"2025-11-24T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.270723 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.270887 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.270987 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.271101 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.271195 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:26Z","lastTransitionTime":"2025-11-24T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.374725 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.374769 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.374791 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.374807 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.374817 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:26Z","lastTransitionTime":"2025-11-24T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.425987 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.426105 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:26 crc kubenswrapper[4915]: E1124 21:20:26.426148 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.426218 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:26 crc kubenswrapper[4915]: E1124 21:20:26.426340 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:26 crc kubenswrapper[4915]: E1124 21:20:26.426427 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.477625 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.477696 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.477720 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.477751 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.477809 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:26Z","lastTransitionTime":"2025-11-24T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.581222 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.581300 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.581332 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.581359 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.581377 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:26Z","lastTransitionTime":"2025-11-24T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.684229 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.684266 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.684277 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.684292 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.684306 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:26Z","lastTransitionTime":"2025-11-24T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.766991 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/2.log" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.767854 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/1.log" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.770606 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f235785-6b02-4304-99b8-3b216c369d45" containerID="13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4" exitCode=1 Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.770655 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerDied","Data":"13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4"} Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.770700 4915 scope.go:117] "RemoveContainer" containerID="82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.771950 4915 scope.go:117] "RemoveContainer" containerID="13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4" Nov 24 21:20:26 crc kubenswrapper[4915]: E1124 21:20:26.772232 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.786376 4915 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.786591 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.786707 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.786822 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.786905 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:26Z","lastTransitionTime":"2025-11-24T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.796574 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.851506 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.878583 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82364b77290f45179454c0fad8a2e9d055652c9b827f70d5e677b85b782c891d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:13Z\\\",\\\"message\\\":\\\"56] Processing sync for service openshift-console/downloads for network=default\\\\nI1124 21:20:13.519616 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-b8kq8\\\\nI1124 21:20:13.519493 6341 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nF1124 
21:20:13.519625 6341 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:13Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:13.519633 6341 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-b8kq8 in node crc\\\\nI1124 21:20:13.519637 6341 ovn.go:134] Ensuring zone local for Pod opensh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:26Z\\\",\\\"message\\\":\\\"k controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:26.251407 6556 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251418 6556 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251327 6556 services_controller.go:434] Service openshift-apiserver/check-endpoints retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{check-endpoints openshift-apiserver 5f356be5-5c32-4923-9be9-f4ede1a71efd 6150 0 2025-02-23 05:23:46 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver-check-endpoints] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0003a7637 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-b
in-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\
"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:26 crc 
kubenswrapper[4915]: I1124 21:20:26.890036 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.890077 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.890132 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.890157 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.890174 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:26Z","lastTransitionTime":"2025-11-24T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.894152 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.907899 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:26 crc 
kubenswrapper[4915]: I1124 21:20:26.920293 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.942654 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.963446 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.975734 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.984048 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.991849 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.991977 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.992108 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.992193 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.992269 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:26Z","lastTransitionTime":"2025-11-24T21:20:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:26 crc kubenswrapper[4915]: I1124 21:20:26.994720 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.006403 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6
de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating 
authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.019250 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.030615 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.045553 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.059049 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.070275 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24
T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:27Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.095334 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.095385 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.095398 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.095418 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.095430 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:27Z","lastTransitionTime":"2025-11-24T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.197552 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.197621 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.197639 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.197663 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.197679 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:27Z","lastTransitionTime":"2025-11-24T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.302863 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.302943 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.302956 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.302984 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.302999 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:27Z","lastTransitionTime":"2025-11-24T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.406582 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.406655 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.406679 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.406712 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.406737 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:27Z","lastTransitionTime":"2025-11-24T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.426131 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:27 crc kubenswrapper[4915]: E1124 21:20:27.426472 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.510017 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.510076 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.510094 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.510118 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.510138 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:27Z","lastTransitionTime":"2025-11-24T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.612703 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.612741 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.612754 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.612797 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.612812 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:27Z","lastTransitionTime":"2025-11-24T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.715382 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.715721 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.715976 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.716338 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.716487 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:27Z","lastTransitionTime":"2025-11-24T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.777087 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/2.log" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.819883 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.819935 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.819952 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.819976 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.819992 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:27Z","lastTransitionTime":"2025-11-24T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.924006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.924054 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.924067 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.924086 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:27 crc kubenswrapper[4915]: I1124 21:20:27.924099 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:27Z","lastTransitionTime":"2025-11-24T21:20:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.026357 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.026395 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.026410 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.026432 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.026444 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.128874 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.128950 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.128974 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.129004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.129025 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.232021 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.232084 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.232102 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.232126 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.232144 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.335599 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.335660 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.335679 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.335705 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.335727 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.426333 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.426358 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.426430 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:28 crc kubenswrapper[4915]: E1124 21:20:28.426598 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:28 crc kubenswrapper[4915]: E1124 21:20:28.426757 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:28 crc kubenswrapper[4915]: E1124 21:20:28.426953 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.437952 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.438032 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.438054 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.438080 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.438100 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.540367 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.540430 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.540453 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.540479 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.540500 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.642686 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.642758 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.642869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.642906 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.642932 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.671060 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.671124 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.671142 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.671168 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.671185 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: E1124 21:20:28.686627 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.691009 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.691053 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.691069 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.691086 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.691097 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: E1124 21:20:28.705096 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.709813 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.709911 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.709926 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.709947 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.710326 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: E1124 21:20:28.728751 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.733297 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.733352 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.733368 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.733388 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.733402 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: E1124 21:20:28.748588 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.753329 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.753381 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.753399 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.753425 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.753443 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: E1124 21:20:28.768165 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:28Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:28 crc kubenswrapper[4915]: E1124 21:20:28.768335 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.770427 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.770474 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.770489 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.770517 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.770531 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.873738 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.873813 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.873832 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.873864 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.873883 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.976726 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.976867 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.976887 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.976918 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:28 crc kubenswrapper[4915]: I1124 21:20:28.976936 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:28Z","lastTransitionTime":"2025-11-24T21:20:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.085426 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.085513 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.085531 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.085560 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.085580 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:29Z","lastTransitionTime":"2025-11-24T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.188645 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.188707 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.188722 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.188747 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.188763 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:29Z","lastTransitionTime":"2025-11-24T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.292095 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.292174 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.292193 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.292221 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.292239 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:29Z","lastTransitionTime":"2025-11-24T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.394745 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.394799 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.394810 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.394825 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.394836 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:29Z","lastTransitionTime":"2025-11-24T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.426374 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:29 crc kubenswrapper[4915]: E1124 21:20:29.426520 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.498291 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.498340 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.498379 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.498401 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.498415 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:29Z","lastTransitionTime":"2025-11-24T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.601511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.601574 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.601592 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.601614 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.601631 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:29Z","lastTransitionTime":"2025-11-24T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.703866 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.703900 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.703911 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.703927 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.703940 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:29Z","lastTransitionTime":"2025-11-24T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.805648 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.805700 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.805716 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.805738 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.805755 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:29Z","lastTransitionTime":"2025-11-24T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.909265 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.909333 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.909380 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.909405 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:29 crc kubenswrapper[4915]: I1124 21:20:29.909421 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:29Z","lastTransitionTime":"2025-11-24T21:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.013103 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.013169 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.013188 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.013212 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.013229 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:30Z","lastTransitionTime":"2025-11-24T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.115200 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.115243 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.115254 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.115271 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.115282 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:30Z","lastTransitionTime":"2025-11-24T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.218824 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.218874 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.218887 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.218906 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.218917 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:30Z","lastTransitionTime":"2025-11-24T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.321761 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.321833 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.321846 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.321866 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.321898 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:30Z","lastTransitionTime":"2025-11-24T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.424020 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.424055 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.424063 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.424095 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.424105 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:30Z","lastTransitionTime":"2025-11-24T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.426351 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.426414 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.426420 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:30 crc kubenswrapper[4915]: E1124 21:20:30.426464 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:30 crc kubenswrapper[4915]: E1124 21:20:30.426600 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:30 crc kubenswrapper[4915]: E1124 21:20:30.426671 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.527460 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.527516 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.527533 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.527554 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.527569 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:30Z","lastTransitionTime":"2025-11-24T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.630287 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.630376 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.630411 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.630444 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.630470 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:30Z","lastTransitionTime":"2025-11-24T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.734239 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.734301 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.734318 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.734344 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.734362 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:30Z","lastTransitionTime":"2025-11-24T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.837118 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.837184 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.837200 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.837223 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.837239 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:30Z","lastTransitionTime":"2025-11-24T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.940419 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.940493 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.940519 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.940556 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:30 crc kubenswrapper[4915]: I1124 21:20:30.940580 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:30Z","lastTransitionTime":"2025-11-24T21:20:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.043447 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.043507 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.043548 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.043582 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.043608 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:31Z","lastTransitionTime":"2025-11-24T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.129511 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.130561 4915 scope.go:117] "RemoveContainer" containerID="13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4" Nov 24 21:20:31 crc kubenswrapper[4915]: E1124 21:20:31.130770 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.146811 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.146863 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.146879 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.146898 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.146910 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:31Z","lastTransitionTime":"2025-11-24T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.154193 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.170374 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.183757 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.206120 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.225975 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.245909 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24
T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.251151 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.251247 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.251270 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.251297 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.251316 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:31Z","lastTransitionTime":"2025-11-24T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.265694 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.282921 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.305770 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:26Z\\\",\\\"message\\\":\\\"k controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:26.251407 6556 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251418 6556 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251327 6556 services_controller.go:434] Service openshift-apiserver/check-endpoints retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{check-endpoints openshift-apiserver 5f356be5-5c32-4923-9be9-f4ede1a71efd 6150 0 2025-02-23 05:23:46 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver-check-endpoints] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0003a7637 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8
ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.321019 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c198
42d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.335465 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc 
kubenswrapper[4915]: I1124 21:20:31.350893 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.354143 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.354197 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.354211 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.354233 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.354247 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:31Z","lastTransitionTime":"2025-11-24T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.373627 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.385794 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.395683 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.407742 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.415554 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:31Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.425746 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:31 crc kubenswrapper[4915]: E1124 21:20:31.425870 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.457143 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.457500 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.457745 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.457913 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.457992 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:31Z","lastTransitionTime":"2025-11-24T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.561258 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.561398 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.561422 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.561454 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.561481 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:31Z","lastTransitionTime":"2025-11-24T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.665091 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.665150 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.665183 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.665248 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.665273 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:31Z","lastTransitionTime":"2025-11-24T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.768427 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.768510 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.768535 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.768563 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.768584 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:31Z","lastTransitionTime":"2025-11-24T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.872357 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.872439 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.872460 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.872492 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.872511 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:31Z","lastTransitionTime":"2025-11-24T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.976988 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.977052 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.977070 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.977098 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:31 crc kubenswrapper[4915]: I1124 21:20:31.977120 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:31Z","lastTransitionTime":"2025-11-24T21:20:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.080101 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.080167 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.080185 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.080215 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.080230 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:32Z","lastTransitionTime":"2025-11-24T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.183644 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.183712 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.183729 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.183757 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.183849 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:32Z","lastTransitionTime":"2025-11-24T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.212305 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs\") pod \"network-metrics-daemon-hkc4w\" (UID: \"a785aaf6-e561-47e9-a3ff-69e6930c5941\") " pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:32 crc kubenswrapper[4915]: E1124 21:20:32.212489 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:20:32 crc kubenswrapper[4915]: E1124 21:20:32.212580 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs podName:a785aaf6-e561-47e9-a3ff-69e6930c5941 nodeName:}" failed. No retries permitted until 2025-11-24 21:20:48.212554161 +0000 UTC m=+66.528806364 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs") pod "network-metrics-daemon-hkc4w" (UID: "a785aaf6-e561-47e9-a3ff-69e6930c5941") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.287542 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.287600 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.287611 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.287632 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.287650 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:32Z","lastTransitionTime":"2025-11-24T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.392239 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.392303 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.392324 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.392352 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.392371 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:32Z","lastTransitionTime":"2025-11-24T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.426118 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.426141 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:32 crc kubenswrapper[4915]: E1124 21:20:32.426312 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:32 crc kubenswrapper[4915]: E1124 21:20:32.426532 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.426931 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:32 crc kubenswrapper[4915]: E1124 21:20:32.427298 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.456032 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a
93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z
\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.467335 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.481074 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.491065 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.494834 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.494854 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.494863 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.494875 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.494884 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:32Z","lastTransitionTime":"2025-11-24T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.501136 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.517964 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a793
79b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.538258 4915 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.552438 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.576725 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.591465 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.599087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.599147 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.599164 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.599188 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.599205 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:32Z","lastTransitionTime":"2025-11-24T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.606043 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.624729 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.643424 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.658436 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.679117 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:26Z\\\",\\\"message\\\":\\\"k controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:26.251407 6556 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251418 6556 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251327 6556 services_controller.go:434] Service openshift-apiserver/check-endpoints retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{check-endpoints openshift-apiserver 5f356be5-5c32-4923-9be9-f4ede1a71efd 6150 0 2025-02-23 05:23:46 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver-check-endpoints] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0003a7637 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8
ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.701733 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.701797 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.701810 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.701829 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.701841 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:32Z","lastTransitionTime":"2025-11-24T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.701840 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.714274 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:32Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:32 crc 
kubenswrapper[4915]: I1124 21:20:32.804394 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.804449 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.804466 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.804511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.804528 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:32Z","lastTransitionTime":"2025-11-24T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.907764 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.907886 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.907909 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.907940 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:32 crc kubenswrapper[4915]: I1124 21:20:32.907961 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:32Z","lastTransitionTime":"2025-11-24T21:20:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.010883 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.010940 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.010956 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.010980 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.010998 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:33Z","lastTransitionTime":"2025-11-24T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.114147 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.114190 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.114206 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.114228 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.114245 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:33Z","lastTransitionTime":"2025-11-24T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.216754 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.216848 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.216866 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.216890 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.216909 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:33Z","lastTransitionTime":"2025-11-24T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.319659 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.319705 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.319720 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.319740 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.319757 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:33Z","lastTransitionTime":"2025-11-24T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.422568 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.422610 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.422622 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.422641 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.422654 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:33Z","lastTransitionTime":"2025-11-24T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.426367 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:33 crc kubenswrapper[4915]: E1124 21:20:33.426558 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.526055 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.526110 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.526128 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.526151 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.526169 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:33Z","lastTransitionTime":"2025-11-24T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.629542 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.629623 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.629643 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.629668 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.629685 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:33Z","lastTransitionTime":"2025-11-24T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.733385 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.733448 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.733467 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.733491 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.733509 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:33Z","lastTransitionTime":"2025-11-24T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.836279 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.836321 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.836331 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.836349 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.836362 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:33Z","lastTransitionTime":"2025-11-24T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.939596 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.939670 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.939680 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.939702 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:33 crc kubenswrapper[4915]: I1124 21:20:33.939714 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:33Z","lastTransitionTime":"2025-11-24T21:20:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.043110 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.043175 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.043196 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.043223 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.043242 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:34Z","lastTransitionTime":"2025-11-24T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.147054 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.147142 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.147156 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.147175 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.148193 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:34Z","lastTransitionTime":"2025-11-24T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.233095 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.233245 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.233386 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:21:06.233319431 +0000 UTC m=+84.549571644 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.233410 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.233483 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.233490 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:21:06.233473035 +0000 UTC m=+84.549725208 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.233544 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.233580 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.233666 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.233680 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.233691 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.233723 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.233746 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.233816 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.233832 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.233725 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:21:06.233717611 +0000 UTC m=+84.549969784 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.233919 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:21:06.233893666 +0000 UTC m=+84.550145929 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.233935 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:21:06.233927907 +0000 UTC m=+84.550180200 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.250800 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.250843 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.250855 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.250872 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.250884 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:34Z","lastTransitionTime":"2025-11-24T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.353244 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.353303 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.353316 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.353336 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.353352 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:34Z","lastTransitionTime":"2025-11-24T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.425724 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.425835 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.425903 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.426010 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.425850 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:34 crc kubenswrapper[4915]: E1124 21:20:34.426227 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.456020 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.456063 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.456074 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.456090 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.456099 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:34Z","lastTransitionTime":"2025-11-24T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.559402 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.559824 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.560057 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.560259 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.560496 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:34Z","lastTransitionTime":"2025-11-24T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.663593 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.663626 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.663636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.663653 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.663665 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:34Z","lastTransitionTime":"2025-11-24T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.766495 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.766587 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.766614 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.766651 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.766675 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:34Z","lastTransitionTime":"2025-11-24T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.869976 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.870021 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.870034 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.870050 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.870063 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:34Z","lastTransitionTime":"2025-11-24T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.973804 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.973866 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.973883 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.973909 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:34 crc kubenswrapper[4915]: I1124 21:20:34.973928 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:34Z","lastTransitionTime":"2025-11-24T21:20:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.077252 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.077312 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.077330 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.077353 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.077371 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:35Z","lastTransitionTime":"2025-11-24T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.180168 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.180230 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.180247 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.180270 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.180288 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:35Z","lastTransitionTime":"2025-11-24T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.283891 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.284024 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.284055 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.284084 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.284107 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:35Z","lastTransitionTime":"2025-11-24T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.386435 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.386478 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.386490 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.386505 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.386518 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:35Z","lastTransitionTime":"2025-11-24T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.425956 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:35 crc kubenswrapper[4915]: E1124 21:20:35.426117 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.490037 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.490099 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.490109 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.490133 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.490143 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:35Z","lastTransitionTime":"2025-11-24T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.594010 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.594081 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.594094 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.594124 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.594139 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:35Z","lastTransitionTime":"2025-11-24T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.697226 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.697286 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.697303 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.697329 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.697351 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:35Z","lastTransitionTime":"2025-11-24T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.800934 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.801012 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.801035 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.801069 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.801096 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:35Z","lastTransitionTime":"2025-11-24T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.904872 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.904954 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.904975 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.905006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:35 crc kubenswrapper[4915]: I1124 21:20:35.905028 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:35Z","lastTransitionTime":"2025-11-24T21:20:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.008181 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.008247 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.008271 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.008299 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.008322 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:36Z","lastTransitionTime":"2025-11-24T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.111077 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.111148 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.111166 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.111191 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.111207 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:36Z","lastTransitionTime":"2025-11-24T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.208014 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.216373 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.216424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.216443 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.216473 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.216492 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:36Z","lastTransitionTime":"2025-11-24T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.219971 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.227735 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.246369 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.277400 4915 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:26Z\\\",\\\"message\\\":\\\"k controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:26.251407 6556 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251418 6556 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251327 6556 services_controller.go:434] Service openshift-apiserver/check-endpoints retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{check-endpoints openshift-apiserver 5f356be5-5c32-4923-9be9-f4ede1a71efd 6150 0 2025-02-23 05:23:46 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver-check-endpoints] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0003a7637 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8
ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.295397 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c198
42d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.310736 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc 
kubenswrapper[4915]: I1124 21:20:36.322250 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.322311 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.322334 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.322384 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.322409 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:36Z","lastTransitionTime":"2025-11-24T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.334768 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.348404 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a793
79b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"s
tate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.369712 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\
\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.390435 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.408512 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.422003 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.425585 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.425639 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.425660 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.425687 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.425710 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:36Z","lastTransitionTime":"2025-11-24T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.426111 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:36 crc kubenswrapper[4915]: E1124 21:20:36.426272 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.426372 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:36 crc kubenswrapper[4915]: E1124 21:20:36.426487 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.426629 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:36 crc kubenswrapper[4915]: E1124 21:20:36.426734 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.442552 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.461232 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.475202 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.496738 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.510735 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.527296 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24
T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:36Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.530315 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.530364 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.530375 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.530393 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.530404 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:36Z","lastTransitionTime":"2025-11-24T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.634490 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.634554 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.634570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.634591 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.634607 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:36Z","lastTransitionTime":"2025-11-24T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.737823 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.737883 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.737900 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.737925 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.737942 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:36Z","lastTransitionTime":"2025-11-24T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.841344 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.841563 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.841597 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.841626 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.841653 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:36Z","lastTransitionTime":"2025-11-24T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.944434 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.944476 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.944485 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.944500 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:36 crc kubenswrapper[4915]: I1124 21:20:36.944512 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:36Z","lastTransitionTime":"2025-11-24T21:20:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.048367 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.048409 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.048422 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.048439 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.048451 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:37Z","lastTransitionTime":"2025-11-24T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.150501 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.150548 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.150559 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.150576 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.150606 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:37Z","lastTransitionTime":"2025-11-24T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.253886 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.253973 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.253996 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.254031 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.254051 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:37Z","lastTransitionTime":"2025-11-24T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.357394 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.357443 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.357453 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.357469 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.357481 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:37Z","lastTransitionTime":"2025-11-24T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.425665 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:37 crc kubenswrapper[4915]: E1124 21:20:37.425888 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.459771 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.459869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.459889 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.459910 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.459927 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:37Z","lastTransitionTime":"2025-11-24T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.562869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.562923 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.562940 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.562963 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.562979 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:37Z","lastTransitionTime":"2025-11-24T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.665628 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.665725 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.665746 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.665816 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.665837 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:37Z","lastTransitionTime":"2025-11-24T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.769361 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.769442 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.769460 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.769486 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.769503 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:37Z","lastTransitionTime":"2025-11-24T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.872466 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.872508 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.872519 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.872535 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.872546 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:37Z","lastTransitionTime":"2025-11-24T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.975197 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.975255 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.975276 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.975302 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:37 crc kubenswrapper[4915]: I1124 21:20:37.975319 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:37Z","lastTransitionTime":"2025-11-24T21:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.078292 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.078379 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.078404 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.078436 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.078459 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:38Z","lastTransitionTime":"2025-11-24T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.181700 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.181762 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.181824 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.181850 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.181866 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:38Z","lastTransitionTime":"2025-11-24T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.286193 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.286274 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.286298 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.286330 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.286355 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:38Z","lastTransitionTime":"2025-11-24T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.389345 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.389409 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.389434 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.389467 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.389490 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:38Z","lastTransitionTime":"2025-11-24T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.426243 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.426305 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.426390 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:38 crc kubenswrapper[4915]: E1124 21:20:38.426536 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:38 crc kubenswrapper[4915]: E1124 21:20:38.426756 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:38 crc kubenswrapper[4915]: E1124 21:20:38.427000 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.492747 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.492866 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.492890 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.492921 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.492947 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:38Z","lastTransitionTime":"2025-11-24T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.596256 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.596332 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.596359 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.596393 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.596418 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:38Z","lastTransitionTime":"2025-11-24T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.699224 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.699284 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.699305 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.699329 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.699347 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:38Z","lastTransitionTime":"2025-11-24T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.803243 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.803678 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.803980 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.804193 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.804466 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:38Z","lastTransitionTime":"2025-11-24T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.906947 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.907221 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.907356 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.907502 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:38 crc kubenswrapper[4915]: I1124 21:20:38.907644 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:38Z","lastTransitionTime":"2025-11-24T21:20:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.010643 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.011018 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.011167 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.011319 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.011468 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:39Z","lastTransitionTime":"2025-11-24T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.102271 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.102673 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.103041 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.103100 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.103125 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:39Z","lastTransitionTime":"2025-11-24T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:39 crc kubenswrapper[4915]: E1124 21:20:39.123764 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.128926 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.128996 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.129016 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.129043 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.129064 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:39Z","lastTransitionTime":"2025-11-24T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:39 crc kubenswrapper[4915]: E1124 21:20:39.148391 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.153369 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.153424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.153434 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.153448 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.153457 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:39Z","lastTransitionTime":"2025-11-24T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:39 crc kubenswrapper[4915]: E1124 21:20:39.173990 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.179556 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.179628 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.179654 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.179684 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.179708 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:39Z","lastTransitionTime":"2025-11-24T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.208122 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.208283 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.208303 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.208331 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.208349 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:39Z","lastTransitionTime":"2025-11-24T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:39 crc kubenswrapper[4915]: E1124 21:20:39.229685 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:39Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:39 crc kubenswrapper[4915]: E1124 21:20:39.230026 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.232192 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.232270 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.232290 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.232318 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.232336 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:39Z","lastTransitionTime":"2025-11-24T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.335426 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.335474 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.335487 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.335508 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.335522 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:39Z","lastTransitionTime":"2025-11-24T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.426960 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:39 crc kubenswrapper[4915]: E1124 21:20:39.427089 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.437710 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.437739 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.437748 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.437759 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.437771 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:39Z","lastTransitionTime":"2025-11-24T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.540835 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.540904 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.540923 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.540950 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.540987 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:39Z","lastTransitionTime":"2025-11-24T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.643845 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.643899 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.643917 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.643940 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.643957 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:39Z","lastTransitionTime":"2025-11-24T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.746987 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.747048 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.747066 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.747083 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.747095 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:39Z","lastTransitionTime":"2025-11-24T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.849946 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.850019 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.850041 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.850070 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.850091 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:39Z","lastTransitionTime":"2025-11-24T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.953376 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.953423 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.953440 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.953463 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:39 crc kubenswrapper[4915]: I1124 21:20:39.953480 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:39Z","lastTransitionTime":"2025-11-24T21:20:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.056165 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.056241 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.056259 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.056284 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.056305 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:40Z","lastTransitionTime":"2025-11-24T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.160197 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.160316 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.160392 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.160432 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.160459 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:40Z","lastTransitionTime":"2025-11-24T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.263337 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.263405 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.263427 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.263453 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.263469 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:40Z","lastTransitionTime":"2025-11-24T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.366848 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.366915 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.366933 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.366959 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.366976 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:40Z","lastTransitionTime":"2025-11-24T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.425845 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.425930 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.425941 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:40 crc kubenswrapper[4915]: E1124 21:20:40.426035 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:40 crc kubenswrapper[4915]: E1124 21:20:40.426303 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:40 crc kubenswrapper[4915]: E1124 21:20:40.426438 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.470302 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.470422 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.470441 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.470469 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.470486 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:40Z","lastTransitionTime":"2025-11-24T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.573146 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.573209 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.573231 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.573260 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.573278 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:40Z","lastTransitionTime":"2025-11-24T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.676016 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.676085 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.676107 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.676138 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.676163 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:40Z","lastTransitionTime":"2025-11-24T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.779570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.779645 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.779671 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.779702 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.779728 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:40Z","lastTransitionTime":"2025-11-24T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.883401 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.883474 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.883499 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.883529 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.883550 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:40Z","lastTransitionTime":"2025-11-24T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.987176 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.987256 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.987274 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.987300 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:40 crc kubenswrapper[4915]: I1124 21:20:40.987319 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:40Z","lastTransitionTime":"2025-11-24T21:20:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.090850 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.090909 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.090926 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.090950 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.090971 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:41Z","lastTransitionTime":"2025-11-24T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.194661 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.194734 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.194757 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.194833 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.194874 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:41Z","lastTransitionTime":"2025-11-24T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.297428 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.297496 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.297520 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.297551 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.297573 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:41Z","lastTransitionTime":"2025-11-24T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.400903 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.400981 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.401019 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.401049 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.401071 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:41Z","lastTransitionTime":"2025-11-24T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.425696 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:41 crc kubenswrapper[4915]: E1124 21:20:41.425914 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.503843 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.503998 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.504022 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.504055 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.504090 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:41Z","lastTransitionTime":"2025-11-24T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.606673 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.606749 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.606772 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.606839 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.606862 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:41Z","lastTransitionTime":"2025-11-24T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.710000 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.710078 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.710101 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.710131 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.710155 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:41Z","lastTransitionTime":"2025-11-24T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.813142 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.813224 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.813256 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.813285 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.813306 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:41Z","lastTransitionTime":"2025-11-24T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.916990 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.917076 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.917103 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.917132 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:41 crc kubenswrapper[4915]: I1124 21:20:41.917154 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:41Z","lastTransitionTime":"2025-11-24T21:20:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.020532 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.020606 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.020629 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.020657 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.020678 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:42Z","lastTransitionTime":"2025-11-24T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.124136 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.124245 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.124263 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.124293 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.124310 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:42Z","lastTransitionTime":"2025-11-24T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.227612 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.227688 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.227711 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.227743 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.227766 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:42Z","lastTransitionTime":"2025-11-24T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.330962 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.331024 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.331041 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.331066 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.331084 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:42Z","lastTransitionTime":"2025-11-24T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.426181 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.426295 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:42 crc kubenswrapper[4915]: E1124 21:20:42.426366 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.426527 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:42 crc kubenswrapper[4915]: E1124 21:20:42.426639 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:42 crc kubenswrapper[4915]: E1124 21:20:42.427900 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.429934 4915 scope.go:117] "RemoveContainer" containerID="13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4" Nov 24 21:20:42 crc kubenswrapper[4915]: E1124 21:20:42.430572 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.435294 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.435837 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.435869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.435947 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.435979 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:42Z","lastTransitionTime":"2025-11-24T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.445690 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.465063 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.493746 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:26Z\\\",\\\"message\\\":\\\"k controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:26.251407 6556 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251418 6556 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251327 6556 services_controller.go:434] Service openshift-apiserver/check-endpoints retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{check-endpoints openshift-apiserver 5f356be5-5c32-4923-9be9-f4ede1a71efd 6150 0 2025-02-23 05:23:46 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver-check-endpoints] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0003a7637 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8
ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.515955 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c198
42d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.530969 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc 
kubenswrapper[4915]: I1124 21:20:42.538238 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.538266 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.538275 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.538309 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.538322 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:42Z","lastTransitionTime":"2025-11-24T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.544696 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef
6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.559117 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59702f-bbae-4a7e-91f2-292687063f63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c9823ea60f9f0adf6aca07204f39ef6bf6eeb622f4fcba7d3f804ae38f337d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa36275462843401ca1bae1eee8831ff2c28d8e6af24eaea56d765c16ecc8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014646fee05d1d964cd6d7b56ab09b6c95b56f92796b542c0675a50734e92f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.582854 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.595646 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.610458 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.625024 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.636669 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.640287 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.640353 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.640376 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.640407 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.640430 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:42Z","lastTransitionTime":"2025-11-24T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.652952 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0
30ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.667271 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.683226 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.701817 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.717050 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.733320 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24
T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:42Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.743913 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.743997 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.744014 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.744038 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.744082 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:42Z","lastTransitionTime":"2025-11-24T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.846901 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.846962 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.846973 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.846995 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.847008 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:42Z","lastTransitionTime":"2025-11-24T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.950128 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.950184 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.950202 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.950231 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:42 crc kubenswrapper[4915]: I1124 21:20:42.950253 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:42Z","lastTransitionTime":"2025-11-24T21:20:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.054251 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.054349 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.054376 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.054415 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.054440 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:43Z","lastTransitionTime":"2025-11-24T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.161834 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.161892 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.161904 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.161923 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.161936 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:43Z","lastTransitionTime":"2025-11-24T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.265010 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.265109 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.265127 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.265164 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.265199 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:43Z","lastTransitionTime":"2025-11-24T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.368199 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.368259 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.368273 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.368290 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.368303 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:43Z","lastTransitionTime":"2025-11-24T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.426557 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:43 crc kubenswrapper[4915]: E1124 21:20:43.426975 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.471690 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.471826 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.471848 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.471873 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.471892 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:43Z","lastTransitionTime":"2025-11-24T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.575327 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.575398 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.575414 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.575440 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.575458 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:43Z","lastTransitionTime":"2025-11-24T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.678840 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.678900 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.678917 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.678942 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.678961 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:43Z","lastTransitionTime":"2025-11-24T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.782399 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.782462 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.782471 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.782493 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.782506 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:43Z","lastTransitionTime":"2025-11-24T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.885640 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.885885 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.885910 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.885942 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.885965 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:43Z","lastTransitionTime":"2025-11-24T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.988661 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.988725 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.988747 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.988820 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:43 crc kubenswrapper[4915]: I1124 21:20:43.988843 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:43Z","lastTransitionTime":"2025-11-24T21:20:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.092360 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.092474 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.092499 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.092529 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.092551 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:44Z","lastTransitionTime":"2025-11-24T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.196071 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.196138 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.196155 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.196183 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.196201 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:44Z","lastTransitionTime":"2025-11-24T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.299549 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.299603 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.299615 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.299635 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.299648 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:44Z","lastTransitionTime":"2025-11-24T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.402428 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.402477 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.402490 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.402511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.402526 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:44Z","lastTransitionTime":"2025-11-24T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.425820 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.425851 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w"
Nov 24 21:20:44 crc kubenswrapper[4915]: E1124 21:20:44.425939 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.425944 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:20:44 crc kubenswrapper[4915]: E1124 21:20:44.426104 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941"
Nov 24 21:20:44 crc kubenswrapper[4915]: E1124 21:20:44.426226 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.505184 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.505260 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.505285 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.505721 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.506076 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:44Z","lastTransitionTime":"2025-11-24T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.608875 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.608927 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.608949 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.608974 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.608994 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:44Z","lastTransitionTime":"2025-11-24T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.712479 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.712510 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.712520 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.712534 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.712546 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:44Z","lastTransitionTime":"2025-11-24T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.815044 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.815076 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.815088 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.815104 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.815116 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:44Z","lastTransitionTime":"2025-11-24T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.917236 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.917282 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.917297 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.917320 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:44 crc kubenswrapper[4915]: I1124 21:20:44.917337 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:44Z","lastTransitionTime":"2025-11-24T21:20:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.020533 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.020578 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.020595 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.020617 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.020633 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:45Z","lastTransitionTime":"2025-11-24T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.124052 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.124097 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.124115 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.124138 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.124155 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:45Z","lastTransitionTime":"2025-11-24T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.227083 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.227132 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.227151 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.227173 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.227190 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:45Z","lastTransitionTime":"2025-11-24T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.329817 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.329852 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.329866 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.329886 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.329901 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:45Z","lastTransitionTime":"2025-11-24T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.425649 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:20:45 crc kubenswrapper[4915]: E1124 21:20:45.425857 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.432092 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.432186 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.432204 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.432223 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.432238 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:45Z","lastTransitionTime":"2025-11-24T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.538067 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.538159 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.538183 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.538211 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.538232 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:45Z","lastTransitionTime":"2025-11-24T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.640765 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.640879 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.640903 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.640934 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.640956 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:45Z","lastTransitionTime":"2025-11-24T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.743389 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.743430 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.743443 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.743460 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.743473 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:45Z","lastTransitionTime":"2025-11-24T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.846109 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.846152 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.846185 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.846203 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.846215 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:45Z","lastTransitionTime":"2025-11-24T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.949179 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.949241 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.949258 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.949282 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:45 crc kubenswrapper[4915]: I1124 21:20:45.949300 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:45Z","lastTransitionTime":"2025-11-24T21:20:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.051615 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.051652 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.051660 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.051674 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.051683 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:46Z","lastTransitionTime":"2025-11-24T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.154893 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.154943 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.154954 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.154973 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.154986 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:46Z","lastTransitionTime":"2025-11-24T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.257328 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.257378 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.257398 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.257417 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.257429 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:46Z","lastTransitionTime":"2025-11-24T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.360096 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.360163 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.360176 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.360195 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.360208 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:46Z","lastTransitionTime":"2025-11-24T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.426476 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.426515 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.426554 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w"
Nov 24 21:20:46 crc kubenswrapper[4915]: E1124 21:20:46.426665 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:20:46 crc kubenswrapper[4915]: E1124 21:20:46.426724 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941"
Nov 24 21:20:46 crc kubenswrapper[4915]: E1124 21:20:46.426842 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.462169 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.462206 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.462216 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.462229 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.462239 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:46Z","lastTransitionTime":"2025-11-24T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.564305 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.564349 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.564362 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.564381 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.564394 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:46Z","lastTransitionTime":"2025-11-24T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.666791 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.666824 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.666834 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.666850 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.666863 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:46Z","lastTransitionTime":"2025-11-24T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.768914 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.768949 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.768960 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.768976 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.768987 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:46Z","lastTransitionTime":"2025-11-24T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.871629 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.871677 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.871692 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.871709 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.871719 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:46Z","lastTransitionTime":"2025-11-24T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.973965 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.973995 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.974005 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.974019 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:46 crc kubenswrapper[4915]: I1124 21:20:46.974028 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:46Z","lastTransitionTime":"2025-11-24T21:20:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.076044 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.076109 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.076128 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.076154 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.076174 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:47Z","lastTransitionTime":"2025-11-24T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.179001 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.179038 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.179048 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.179061 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.179071 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:47Z","lastTransitionTime":"2025-11-24T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.281991 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.282087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.282146 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.282170 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.282186 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:47Z","lastTransitionTime":"2025-11-24T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.384894 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.384981 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.385002 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.385026 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.385043 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:47Z","lastTransitionTime":"2025-11-24T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.426534 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:47 crc kubenswrapper[4915]: E1124 21:20:47.426765 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.487977 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.488157 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.488185 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.488216 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.488239 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:47Z","lastTransitionTime":"2025-11-24T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.591369 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.591726 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.591824 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.591859 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.591882 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:47Z","lastTransitionTime":"2025-11-24T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.694936 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.695006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.695023 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.695049 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.695070 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:47Z","lastTransitionTime":"2025-11-24T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.798032 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.798116 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.798144 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.798173 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.798198 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:47Z","lastTransitionTime":"2025-11-24T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.900685 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.900749 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.900767 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.900820 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:47 crc kubenswrapper[4915]: I1124 21:20:47.900837 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:47Z","lastTransitionTime":"2025-11-24T21:20:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.003576 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.003624 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.003636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.003657 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.003670 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:48Z","lastTransitionTime":"2025-11-24T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.106637 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.106716 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.106729 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.106746 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.106759 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:48Z","lastTransitionTime":"2025-11-24T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.209525 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.209563 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.209573 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.209589 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.209600 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:48Z","lastTransitionTime":"2025-11-24T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.285461 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs\") pod \"network-metrics-daemon-hkc4w\" (UID: \"a785aaf6-e561-47e9-a3ff-69e6930c5941\") " pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:48 crc kubenswrapper[4915]: E1124 21:20:48.285588 4915 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:20:48 crc kubenswrapper[4915]: E1124 21:20:48.285651 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs podName:a785aaf6-e561-47e9-a3ff-69e6930c5941 nodeName:}" failed. No retries permitted until 2025-11-24 21:21:20.285634742 +0000 UTC m=+98.601886925 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs") pod "network-metrics-daemon-hkc4w" (UID: "a785aaf6-e561-47e9-a3ff-69e6930c5941") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.312038 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.312073 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.312081 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.312093 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.312101 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:48Z","lastTransitionTime":"2025-11-24T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.415173 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.415221 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.415233 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.415250 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.415263 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:48Z","lastTransitionTime":"2025-11-24T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.425787 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.425885 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.425952 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:48 crc kubenswrapper[4915]: E1124 21:20:48.425921 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:48 crc kubenswrapper[4915]: E1124 21:20:48.426114 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:48 crc kubenswrapper[4915]: E1124 21:20:48.426338 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.518467 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.518519 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.518532 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.518552 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.518564 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:48Z","lastTransitionTime":"2025-11-24T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.621060 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.621116 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.621145 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.621164 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.621177 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:48Z","lastTransitionTime":"2025-11-24T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.724422 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.724506 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.724529 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.724554 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.724573 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:48Z","lastTransitionTime":"2025-11-24T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.827420 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.827466 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.827477 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.827493 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.827507 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:48Z","lastTransitionTime":"2025-11-24T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.930152 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.930291 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.930375 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.930395 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:48 crc kubenswrapper[4915]: I1124 21:20:48.930407 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:48Z","lastTransitionTime":"2025-11-24T21:20:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.033118 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.033169 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.033183 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.033202 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.033216 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.135873 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.135972 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.135996 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.136028 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.136049 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.238865 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.238917 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.238929 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.238948 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.238960 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.290648 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.290716 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.290733 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.290757 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.290800 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: E1124 21:20:49.306809 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:49Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.310739 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.310769 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.310797 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.310813 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.310824 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: E1124 21:20:49.324271 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:49Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.327443 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.327477 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.327489 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.327507 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.327517 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: E1124 21:20:49.337684 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:49Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.341186 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.341212 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.341221 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.341234 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.341280 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: E1124 21:20:49.352063 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:49Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.355209 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.355249 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.355260 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.355275 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.355286 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: E1124 21:20:49.365848 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:49Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:49 crc kubenswrapper[4915]: E1124 21:20:49.365960 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.367864 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.367896 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.367908 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.367923 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.367932 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.425797 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:49 crc kubenswrapper[4915]: E1124 21:20:49.425933 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.470095 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.470132 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.470141 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.470156 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.470174 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.573093 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.573192 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.573224 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.573255 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.573296 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.676628 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.676670 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.676681 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.676700 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.676711 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.779615 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.779651 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.779660 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.779675 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.779686 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.882414 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.882479 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.882491 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.882511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.882524 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.985667 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.985722 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.985737 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.985758 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:49 crc kubenswrapper[4915]: I1124 21:20:49.985806 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:49Z","lastTransitionTime":"2025-11-24T21:20:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.088409 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.088485 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.088505 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.088529 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.088542 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:50Z","lastTransitionTime":"2025-11-24T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.191802 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.191858 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.191869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.191893 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.191904 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:50Z","lastTransitionTime":"2025-11-24T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.294399 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.294460 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.294479 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.294503 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.294522 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:50Z","lastTransitionTime":"2025-11-24T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.396842 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.396871 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.396880 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.396895 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.396904 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:50Z","lastTransitionTime":"2025-11-24T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.426619 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:50 crc kubenswrapper[4915]: E1124 21:20:50.426835 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.426901 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.426962 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:50 crc kubenswrapper[4915]: E1124 21:20:50.427095 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:50 crc kubenswrapper[4915]: E1124 21:20:50.427201 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.499193 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.499226 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.499236 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.499253 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.499263 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:50Z","lastTransitionTime":"2025-11-24T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.602065 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.602120 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.602132 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.602152 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.602164 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:50Z","lastTransitionTime":"2025-11-24T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.703987 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.704017 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.704026 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.704039 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.704050 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:50Z","lastTransitionTime":"2025-11-24T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.806509 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.806546 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.806558 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.806575 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.806586 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:50Z","lastTransitionTime":"2025-11-24T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.865839 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b8kq8_f5b8930d-4919-4a02-a962-c93b5f8f4ad3/kube-multus/0.log" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.865913 4915 generic.go:334] "Generic (PLEG): container finished" podID="f5b8930d-4919-4a02-a962-c93b5f8f4ad3" containerID="5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff" exitCode=1 Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.865957 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b8kq8" event={"ID":"f5b8930d-4919-4a02-a962-c93b5f8f4ad3","Type":"ContainerDied","Data":"5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff"} Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.866554 4915 scope.go:117] "RemoveContainer" containerID="5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.890274 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:50Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.908617 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.908661 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.908670 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.908688 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.908699 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:50Z","lastTransitionTime":"2025-11-24T21:20:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.911557 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:50Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.931401 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:26Z\\\",\\\"message\\\":\\\"k controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:26.251407 6556 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251418 6556 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251327 6556 services_controller.go:434] Service openshift-apiserver/check-endpoints retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{check-endpoints openshift-apiserver 5f356be5-5c32-4923-9be9-f4ede1a71efd 6150 0 2025-02-23 05:23:46 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver-check-endpoints] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0003a7637 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8
ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:50Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.946812 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c198
42d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:50Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.958220 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:50Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:50 crc 
kubenswrapper[4915]: I1124 21:20:50.969934 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:50Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.987930 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-
24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:50Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:50 crc kubenswrapper[4915]: I1124 21:20:50.999109 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59702f-bbae-4a7e-91f2-292687063f63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c9823ea60f9f0adf6aca07204f39ef6bf6eeb622f4fcba7d3f804ae38f337d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa36275462843401ca1bae1eee8831ff2c28d8e6af24eaea56d765c16ecc8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014646fee05d1d964cd6d7b56ab09b6c95b56f92796b542c0675a50734e92f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:50Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.010360 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.010387 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.010395 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.010418 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.010427 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:51Z","lastTransitionTime":"2025-11-24T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.017767 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.028597 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.039855 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.051906 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.068067 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c829
8d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.081920 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.093376 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.111595 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.112762 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.112899 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.112965 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.112992 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.113010 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:51Z","lastTransitionTime":"2025-11-24T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.127451 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.141537 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:50Z\\\",\\\"message\\\":\\\"2025-11-24T21:20:04+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0f44749b-b8d6-4b46-95e9-9d3afc3cbd35\\\\n2025-11-24T21:20:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0f44749b-b8d6-4b46-95e9-9d3afc3cbd35 to /host/opt/cni/bin/\\\\n2025-11-24T21:20:05Z [verbose] multus-daemon started\\\\n2025-11-24T21:20:05Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:20:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.215424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.215467 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.215479 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.215498 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.215511 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:51Z","lastTransitionTime":"2025-11-24T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.318303 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.318350 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.318362 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.318378 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.318390 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:51Z","lastTransitionTime":"2025-11-24T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.421684 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.421738 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.421751 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.421772 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.421808 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:51Z","lastTransitionTime":"2025-11-24T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.425873 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:51 crc kubenswrapper[4915]: E1124 21:20:51.425996 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.526764 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.526858 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.526875 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.526900 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.526918 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:51Z","lastTransitionTime":"2025-11-24T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.629037 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.629095 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.629111 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.629133 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.629148 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:51Z","lastTransitionTime":"2025-11-24T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.731498 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.731539 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.731550 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.731564 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.731574 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:51Z","lastTransitionTime":"2025-11-24T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.834044 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.834095 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.834107 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.834123 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.834135 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:51Z","lastTransitionTime":"2025-11-24T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.871823 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b8kq8_f5b8930d-4919-4a02-a962-c93b5f8f4ad3/kube-multus/0.log" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.871882 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b8kq8" event={"ID":"f5b8930d-4919-4a02-a962-c93b5f8f4ad3","Type":"ContainerStarted","Data":"926013354edf1382934bf5829af75dc38d00843d1d93ae599bfcedd1322571d7"} Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.892964 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.904679 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59702f-bbae-4a7e-91f2-292687063f63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c9823ea60f9f0adf6aca07204f39ef6bf6eeb622f4fcba7d3f804ae38f337d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa36275462843401ca1bae1eee8831ff2c28d8e6af24eaea56d765c16ecc8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014646fee05d1d964cd6d7b56ab09b6c95b56f92796b542c0675a50734e92f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.924939 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.937542 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.937594 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:51 crc kubenswrapper[4915]: 
I1124 21:20:51.937604 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.937620 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.937630 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:51Z","lastTransitionTime":"2025-11-24T21:20:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.937994 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.952420 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.962201 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.971587 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.984256 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\"
,\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:51 crc kubenswrapper[4915]: I1124 21:20:51.998610 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:51Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.010405 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.023429 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.033991 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.039268 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.039296 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.039305 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.039319 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.039327 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:52Z","lastTransitionTime":"2025-11-24T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.046860 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://926013354edf1382934bf5829af75dc38d00843d1d93ae599bfcedd1322571d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:50Z\\\",\\\"message\\\":\\\"2025-11-24T21:20:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0f44749b-b8d6-4b46-95e9-9d3afc3cbd35\\\\n2025-11-24T21:20:04+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0f44749b-b8d6-4b46-95e9-9d3afc3cbd35 to /host/opt/cni/bin/\\\\n2025-11-24T21:20:05Z [verbose] multus-daemon started\\\\n2025-11-24T21:20:05Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:20:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.059250 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.073488 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.094331 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:26Z\\\",\\\"message\\\":\\\"k controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:26.251407 6556 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251418 6556 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251327 6556 services_controller.go:434] Service openshift-apiserver/check-endpoints retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{check-endpoints openshift-apiserver 5f356be5-5c32-4923-9be9-f4ede1a71efd 6150 0 2025-02-23 05:23:46 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver-check-endpoints] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0003a7637 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8
ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.105635 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c198
42d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.117963 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc 
kubenswrapper[4915]: I1124 21:20:52.142033 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.142078 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.142090 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.142106 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.142121 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:52Z","lastTransitionTime":"2025-11-24T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.244262 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.244305 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.244314 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.244329 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.244339 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:52Z","lastTransitionTime":"2025-11-24T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.347210 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.347431 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.347530 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.347604 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.347665 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:52Z","lastTransitionTime":"2025-11-24T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.426522 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.426557 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.426542 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:52 crc kubenswrapper[4915]: E1124 21:20:52.426679 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:52 crc kubenswrapper[4915]: E1124 21:20:52.426751 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:52 crc kubenswrapper[4915]: E1124 21:20:52.426847 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.441022 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.452203 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.452251 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.452261 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.452278 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.452296 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:52Z","lastTransitionTime":"2025-11-24T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.453538 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.472688 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:26Z\\\",\\\"message\\\":\\\"k controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:26.251407 6556 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251418 6556 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251327 6556 services_controller.go:434] Service openshift-apiserver/check-endpoints retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{check-endpoints openshift-apiserver 5f356be5-5c32-4923-9be9-f4ede1a71efd 6150 0 2025-02-23 05:23:46 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver-check-endpoints] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0003a7637 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8
ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.484094 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c198
42d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.494048 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc 
kubenswrapper[4915]: I1124 21:20:52.505138 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.518152 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59702f-bbae-4a7e-91f2-292687063f63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c9823ea60f9f0adf6aca07204f39ef6bf6eeb622f4fcba7d3f804ae38f337d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa36275462843401ca1bae1eee8831ff2c28d8e6af24eaea56d765c16ecc8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014646fee05d1d964cd6d7b56ab09b6c95b56f92796b542c0675a50734e92f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.535908 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.550378 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.554745 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.554873 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.554888 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.554903 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.555170 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:52Z","lastTransitionTime":"2025-11-24T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.565081 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.576940 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.587303 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.605766 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\"
,\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.621109 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.635624 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.652642 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.657833 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.657875 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.657887 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.657903 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.657916 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:52Z","lastTransitionTime":"2025-11-24T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.664754 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.675456 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://926013354edf1382934bf5829af75dc38d00843d1d93ae599bfcedd1322571d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:50Z\\\",\\\"message\\\":\\\"2025-11-24T21:20:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0f44749b-b8d6-4b46-95e9-9d3afc3cbd35\\\\n2025-11-24T21:20:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0f44749b-b8d6-4b46-95e9-9d3afc3cbd35 to /host/opt/cni/bin/\\\\n2025-11-24T21:20:05Z [verbose] multus-daemon started\\\\n2025-11-24T21:20:05Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T21:20:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:52Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.760572 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.760655 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.760672 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.760696 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.760717 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:52Z","lastTransitionTime":"2025-11-24T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.864255 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.864312 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.864333 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.864360 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.864382 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:52Z","lastTransitionTime":"2025-11-24T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.966974 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.967043 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.967055 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.967083 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:52 crc kubenswrapper[4915]: I1124 21:20:52.967099 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:52Z","lastTransitionTime":"2025-11-24T21:20:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.070212 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.070270 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.070281 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.070315 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.070329 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:53Z","lastTransitionTime":"2025-11-24T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.173050 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.173129 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.173145 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.173171 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.173185 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:53Z","lastTransitionTime":"2025-11-24T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.276759 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.276830 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.276840 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.276854 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.276864 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:53Z","lastTransitionTime":"2025-11-24T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.379290 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.379363 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.379377 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.379408 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.379426 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:53Z","lastTransitionTime":"2025-11-24T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.426667 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:53 crc kubenswrapper[4915]: E1124 21:20:53.426848 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.481982 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.482048 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.482062 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.482087 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.482102 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:53Z","lastTransitionTime":"2025-11-24T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.588912 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.588971 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.588991 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.589013 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.589026 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:53Z","lastTransitionTime":"2025-11-24T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.692487 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.692550 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.692563 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.692585 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.692605 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:53Z","lastTransitionTime":"2025-11-24T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.796120 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.796177 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.796189 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.796208 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.796221 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:53Z","lastTransitionTime":"2025-11-24T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.899050 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.899119 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.899137 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.899162 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:53 crc kubenswrapper[4915]: I1124 21:20:53.899181 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:53Z","lastTransitionTime":"2025-11-24T21:20:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.001755 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.001839 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.001856 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.001879 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.001896 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:54Z","lastTransitionTime":"2025-11-24T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.104661 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.104703 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.104740 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.104758 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.104769 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:54Z","lastTransitionTime":"2025-11-24T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.207286 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.207332 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.207356 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.207377 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.207391 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:54Z","lastTransitionTime":"2025-11-24T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.310352 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.310418 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.310430 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.310446 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.310457 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:54Z","lastTransitionTime":"2025-11-24T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.412917 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.412963 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.412974 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.412996 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.413006 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:54Z","lastTransitionTime":"2025-11-24T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.426255 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.426296 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.426343 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:54 crc kubenswrapper[4915]: E1124 21:20:54.426389 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:54 crc kubenswrapper[4915]: E1124 21:20:54.426599 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:54 crc kubenswrapper[4915]: E1124 21:20:54.426674 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.515927 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.515979 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.515992 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.516009 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.516020 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:54Z","lastTransitionTime":"2025-11-24T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.618978 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.619015 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.619026 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.619042 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.619052 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:54Z","lastTransitionTime":"2025-11-24T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.725210 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.725553 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.725583 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.725613 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.725635 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:54Z","lastTransitionTime":"2025-11-24T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.828340 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.828375 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.828386 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.828400 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.828411 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:54Z","lastTransitionTime":"2025-11-24T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.931046 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.931082 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.931094 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.931109 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:54 crc kubenswrapper[4915]: I1124 21:20:54.931120 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:54Z","lastTransitionTime":"2025-11-24T21:20:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.033352 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.033396 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.033406 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.033424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.033436 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:55Z","lastTransitionTime":"2025-11-24T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.135954 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.136000 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.136012 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.136028 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.136041 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:55Z","lastTransitionTime":"2025-11-24T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.238269 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.238299 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.238310 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.238326 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.238337 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:55Z","lastTransitionTime":"2025-11-24T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.341021 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.341071 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.341088 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.341111 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.341127 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:55Z","lastTransitionTime":"2025-11-24T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.426189 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:55 crc kubenswrapper[4915]: E1124 21:20:55.426323 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.427383 4915 scope.go:117] "RemoveContainer" containerID="13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.443747 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.443812 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.443824 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.443839 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.443853 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:55Z","lastTransitionTime":"2025-11-24T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.546240 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.546345 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.546406 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.546495 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.546528 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:55Z","lastTransitionTime":"2025-11-24T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.649650 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.649714 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.649729 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.649751 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.649766 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:55Z","lastTransitionTime":"2025-11-24T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.753334 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.753381 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.753399 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.753427 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.753445 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:55Z","lastTransitionTime":"2025-11-24T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.857693 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.857737 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.857748 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.857764 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.857792 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:55Z","lastTransitionTime":"2025-11-24T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.893386 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/2.log" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.895849 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerStarted","Data":"ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960"} Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.896284 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.906411 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035e
c78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\
\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.929333 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de259712
6bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"n
ame\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c829
8d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.941810 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.957443 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.959755 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.959818 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.959839 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.959861 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.959875 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:55Z","lastTransitionTime":"2025-11-24T21:20:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.974066 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.988316 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://926013354edf1382934bf5829af75dc38d00843d1d93ae599bfcedd1322571d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:50Z\\\",\\\"message\\\":\\\"2025-11-24T21:20:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0f44749b-b8d6-4b46-95e9-9d3afc3cbd35\\\\n2025-11-24T21:20:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0f44749b-b8d6-4b46-95e9-9d3afc3cbd35 to /host/opt/cni/bin/\\\\n2025-11-24T21:20:05Z [verbose] multus-daemon started\\\\n2025-11-24T21:20:05Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:20:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-
lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:55 crc kubenswrapper[4915]: I1124 21:20:55.999137 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c198
42d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:55Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.008331 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc 
kubenswrapper[4915]: I1124 21:20:56.019592 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.029896 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.045218 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:26Z\\\",\\\"message\\\":\\\"k controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:26.251407 6556 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251418 6556 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251327 6556 services_controller.go:434] Service openshift-apiserver/check-endpoints retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{check-endpoints openshift-apiserver 5f356be5-5c32-4923-9be9-f4ede1a71efd 6150 0 2025-02-23 05:23:46 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver-check-endpoints] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0003a7637 \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.056494 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.061707 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 
21:20:56.061740 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.061748 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.061761 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.061770 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:56Z","lastTransitionTime":"2025-11-24T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.065093 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.075357 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.089145 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.108565 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59702f-bbae-4a7e-91f2-292687063f63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c9823ea60f9f0adf6aca07204f39ef6bf6eeb622f4fcba7d3f804ae38f337d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa36275462843401ca1bae1eee8831ff2c28d8e6af24eaea56d765c16ecc8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014646fee05d1d964cd6d7b56ab09b6c95b56f92796b542c0675a50734e92f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.128941 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.139111 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.163321 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.163355 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.163364 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.163378 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.163388 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:56Z","lastTransitionTime":"2025-11-24T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.265993 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.266030 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.266041 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.266058 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.266070 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:56Z","lastTransitionTime":"2025-11-24T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.368807 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.368863 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.368881 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.369408 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.369628 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:56Z","lastTransitionTime":"2025-11-24T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.426480 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.426562 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:56 crc kubenswrapper[4915]: E1124 21:20:56.426605 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:56 crc kubenswrapper[4915]: E1124 21:20:56.426716 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.426870 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:56 crc kubenswrapper[4915]: E1124 21:20:56.426934 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.476163 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.476208 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.476220 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.476238 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.476250 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:56Z","lastTransitionTime":"2025-11-24T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.579307 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.579398 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.579427 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.579458 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.579482 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:56Z","lastTransitionTime":"2025-11-24T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.681927 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.682023 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.682045 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.682077 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.682100 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:56Z","lastTransitionTime":"2025-11-24T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.784445 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.784484 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.784492 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.784506 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.784515 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:56Z","lastTransitionTime":"2025-11-24T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.886222 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.886254 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.886262 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.886274 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.886283 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:56Z","lastTransitionTime":"2025-11-24T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.900666 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/3.log" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.901340 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/2.log" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.904496 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f235785-6b02-4304-99b8-3b216c369d45" containerID="ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960" exitCode=1 Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.904545 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerDied","Data":"ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960"} Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.904592 4915 scope.go:117] "RemoveContainer" containerID="13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.905693 4915 scope.go:117] "RemoveContainer" containerID="ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960" Nov 24 21:20:56 crc kubenswrapper[4915]: E1124 21:20:56.905967 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.919327 4915 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://926013354edf1382934bf5829af75dc38d00843d1d93ae599bfcedd1322571d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:50Z\\\",\\\"message\\\":\\\"2025-11-24T21:20:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0f44749b-b8d6-4b46-95e9-9d3afc3cbd35\\\\n2025-11-24T21:20:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0f44749b-b8d6-4b46-95e9-9d3afc3cbd35 to 
/host/opt/cni/bin/\\\\n2025-11-24T21:20:05Z [verbose] multus-daemon started\\\\n2025-11-24T21:20:05Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:20:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-cert
s\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.937561 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.964345 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://13f9fc5ec2725019a1b1d6cd3593fd90e19f9e035d5480c8c7556500f34bfbf4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:26Z\\\",\\\"message\\\":\\\"k controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-24T21:20:26Z is after 2025-08-24T17:21:41Z]\\\\nI1124 21:20:26.251407 6556 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251418 6556 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr\\\\nI1124 21:20:26.251327 6556 services_controller.go:434] Service openshift-apiserver/check-endpoints retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{check-endpoints openshift-apiserver 5f356be5-5c32-4923-9be9-f4ede1a71efd 6150 0 2025-02-23 05:23:46 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[prometheus:openshift-apiserver-check-endpoints] map[include.release.openshift.io/hypershift:true include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0003a7637 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{Service\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:56Z\\\",\\\"message\\\":\\\"I1124 21:20:56.409410 6914 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:20:56.409420 6914 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:20:56.409443 6914 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:20:56.409449 6914 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:20:56.409464 6914 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:20:56.409470 6914 
handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:20:56.409472 6914 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:20:56.409497 6914 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:20:56.409506 6914 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:20:56.409518 6914 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:20:56.409535 6914 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:20:56.409542 6914 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:20:56.409553 6914 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:20:56.409565 6914 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:20:56.409571 6914 factory.go:656] Stopping watch factory\\\\nI1124 21:20:56.409594 6914 ovnkube.go:599] Stopped ovnkube\\\\nI1124 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"
/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.977349 4915 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.989015 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.989064 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.989081 4915 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.989104 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.989155 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:56Z","lastTransitionTime":"2025-11-24T21:20:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:56 crc kubenswrapper[4915]: I1124 21:20:56.990465 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:56Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.007333 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.025849 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.040195 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.054139 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.065466 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.075721 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.090759 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.093266 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.093311 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.093327 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.093349 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.093366 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:57Z","lastTransitionTime":"2025-11-24T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.104952 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59702f-bbae-4a7e-91f2-292687063f63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c9823ea60f9f0adf6aca07204f39ef6bf6eeb622f4fcba7d3f804ae38f337d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://3aa36275462843401ca1bae1eee8831ff2c28d8e6af24eaea56d765c16ecc8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014646fee05d1d964cd6d7b56ab09b6c95b56f92796b542c0675a50734e92f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.117271 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.137138 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.150735 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.168079 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o
://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.183718 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.195423 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.195455 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.195466 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:57 crc 
kubenswrapper[4915]: I1124 21:20:57.195480 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.195491 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:57Z","lastTransitionTime":"2025-11-24T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.298743 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.298808 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.298821 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.298839 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.298850 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:57Z","lastTransitionTime":"2025-11-24T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.401659 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.401717 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.401734 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.401755 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.401805 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:57Z","lastTransitionTime":"2025-11-24T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.426581 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:57 crc kubenswrapper[4915]: E1124 21:20:57.426755 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.505276 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.505372 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.505392 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.505424 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.505443 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:57Z","lastTransitionTime":"2025-11-24T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.608903 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.609001 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.609019 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.609047 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.609066 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:57Z","lastTransitionTime":"2025-11-24T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.713274 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.713327 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.713344 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.713405 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.713426 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:57Z","lastTransitionTime":"2025-11-24T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.817319 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.817378 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.817394 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.817419 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.817437 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:57Z","lastTransitionTime":"2025-11-24T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.911396 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/3.log" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.916458 4915 scope.go:117] "RemoveContainer" containerID="ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960" Nov 24 21:20:57 crc kubenswrapper[4915]: E1124 21:20:57.916693 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.921145 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.921231 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.921249 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.921281 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.921311 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:57Z","lastTransitionTime":"2025-11-24T21:20:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.943703 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.964752 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.979056 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:57 crc kubenswrapper[4915]: I1124 21:20:57.994305 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:57Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.012766 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.024374 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.024432 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.024451 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.024473 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.024491 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:58Z","lastTransitionTime":"2025-11-24T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.027394 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59702f-bbae-4a7e-91f2-292687063f63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c9823ea60f9f0adf6aca07204f39ef6bf6eeb622f4fcba7d3f804ae38f337d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://3aa36275462843401ca1bae1eee8831ff2c28d8e6af24eaea56d765c16ecc8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014646fee05d1d964cd6d7b56ab09b6c95b56f92796b542c0675a50734e92f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.053417 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.069249 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb4
13bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b
73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.085318 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766
952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.106136 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\"
,\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.120490 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.127929 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.127977 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.127991 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:58 crc 
kubenswrapper[4915]: I1124 21:20:58.128014 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.128032 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:58Z","lastTransitionTime":"2025-11-24T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.137171 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.154003 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://926013354edf1382934bf5829af75dc38d00843d1d93ae599bfcedd1322571d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:50Z\\\",\\\"message\\\":\\\"2025-11-24T21:20:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0f44749b-b8d6-4b46-95e9-9d3afc3cbd35\\\\n2025-11-24T21:20:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0f44749b-b8d6-4b46-95e9-9d3afc3cbd35 to /host/opt/cni/bin/\\\\n2025-11-24T21:20:05Z [verbose] multus-daemon started\\\\n2025-11-24T21:20:05Z [verbose] 
Readiness Indicator file check\\\\n2025-11-24T21:20:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.177029 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:56Z\\\",\\\"message\\\":\\\"I1124 21:20:56.409410 6914 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI1124 21:20:56.409420 6914 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:20:56.409443 6914 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:20:56.409449 6914 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1124 21:20:56.409464 6914 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:20:56.409470 6914 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:20:56.409472 6914 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:20:56.409497 6914 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:20:56.409506 6914 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:20:56.409518 6914 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:20:56.409535 6914 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:20:56.409542 6914 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:20:56.409553 6914 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:20:56.409565 6914 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:20:56.409571 6914 factory.go:656] Stopping watch factory\\\\nI1124 21:20:56.409594 6914 ovnkube.go:599] Stopped ovnkube\\\\nI1124 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8
ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.191917 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c198
42d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.205772 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc 
kubenswrapper[4915]: I1124 21:20:58.221011 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.230413 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.230450 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.230461 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.230478 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.230491 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:58Z","lastTransitionTime":"2025-11-24T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.235325 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:58Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.334277 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.334335 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.334344 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.334365 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.334378 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:58Z","lastTransitionTime":"2025-11-24T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.426362 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.426482 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:20:58 crc kubenswrapper[4915]: E1124 21:20:58.426663 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:20:58 crc kubenswrapper[4915]: E1124 21:20:58.426906 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.427235 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:20:58 crc kubenswrapper[4915]: E1124 21:20:58.427416 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.437158 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.437247 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.437265 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.437297 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.437318 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:58Z","lastTransitionTime":"2025-11-24T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.540294 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.540383 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.540413 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.540466 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.540496 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:58Z","lastTransitionTime":"2025-11-24T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.643436 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.643520 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.643548 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.643596 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.643622 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:58Z","lastTransitionTime":"2025-11-24T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.747006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.747080 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.747103 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.747131 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.747154 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:58Z","lastTransitionTime":"2025-11-24T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.850145 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.850290 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.850321 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.850347 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.850363 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:58Z","lastTransitionTime":"2025-11-24T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.953850 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.953922 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.953945 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.953976 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:58 crc kubenswrapper[4915]: I1124 21:20:58.953997 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:58Z","lastTransitionTime":"2025-11-24T21:20:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.057494 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.057551 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.057567 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.057591 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.057608 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.161005 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.161083 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.161105 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.161134 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.161161 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.264088 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.264145 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.264158 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.264177 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.264189 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.367606 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.367667 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.367683 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.367705 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.367722 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.425968 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:20:59 crc kubenswrapper[4915]: E1124 21:20:59.426176 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.470991 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.471112 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.471140 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.471181 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.471198 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.486996 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.487055 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.487068 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.487086 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.487098 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:59 crc kubenswrapper[4915]: E1124 21:20:59.504233 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.508426 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.508472 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.508487 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.508509 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.508524 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:59 crc kubenswrapper[4915]: E1124 21:20:59.529643 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.536282 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.536357 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.536366 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.536382 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.536392 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:59 crc kubenswrapper[4915]: E1124 21:20:59.549848 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.553297 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.553345 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.553359 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.553382 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.553396 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:59 crc kubenswrapper[4915]: E1124 21:20:59.565899 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.569566 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.569601 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.569614 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.569663 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.569677 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:59 crc kubenswrapper[4915]: E1124 21:20:59.583333 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:20:59Z is after 2025-08-24T17:21:41Z" Nov 24 21:20:59 crc kubenswrapper[4915]: E1124 21:20:59.583470 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.585610 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.585633 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.585640 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.585653 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.585662 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.689295 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.689747 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.689949 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.690113 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.690251 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.792999 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.793044 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.793057 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.793074 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.793084 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.895962 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.896033 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.896050 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.896079 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:20:59 crc kubenswrapper[4915]: I1124 21:20:59.896096 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:20:59.999682 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:20:59.999762 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:20:59.999837 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:20:59.999868 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:20:59.999890 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:20:59Z","lastTransitionTime":"2025-11-24T21:20:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.103524 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.103560 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.103568 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.103584 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.103594 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:00Z","lastTransitionTime":"2025-11-24T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.206052 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.206093 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.206103 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.206117 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.206128 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:00Z","lastTransitionTime":"2025-11-24T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.308428 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.308456 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.308465 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.308481 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.308488 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:00Z","lastTransitionTime":"2025-11-24T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.412083 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.412617 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.412808 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.412979 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.413132 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:00Z","lastTransitionTime":"2025-11-24T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.426566 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:00 crc kubenswrapper[4915]: E1124 21:21:00.426668 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.426802 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.426831 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:00 crc kubenswrapper[4915]: E1124 21:21:00.426892 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:00 crc kubenswrapper[4915]: E1124 21:21:00.427038 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.515927 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.515988 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.516006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.516031 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.516047 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:00Z","lastTransitionTime":"2025-11-24T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.619384 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.619441 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.619457 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.619480 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.619498 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:00Z","lastTransitionTime":"2025-11-24T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.722949 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.723010 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.723019 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.723051 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.723061 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:00Z","lastTransitionTime":"2025-11-24T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.826551 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.826593 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.826607 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.826626 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.826639 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:00Z","lastTransitionTime":"2025-11-24T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.929348 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.929421 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.929442 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.929472 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:00 crc kubenswrapper[4915]: I1124 21:21:00.929494 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:00Z","lastTransitionTime":"2025-11-24T21:21:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.031970 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.032007 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.032020 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.032036 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.032048 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:01Z","lastTransitionTime":"2025-11-24T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.135250 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.135327 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.135350 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.135382 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.135405 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:01Z","lastTransitionTime":"2025-11-24T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.238092 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.238169 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.238184 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.238201 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.238213 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:01Z","lastTransitionTime":"2025-11-24T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.341640 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.341696 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.341713 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.341738 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.341754 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:01Z","lastTransitionTime":"2025-11-24T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.426663 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:01 crc kubenswrapper[4915]: E1124 21:21:01.426958 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.445159 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.445247 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.445263 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.445282 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.445296 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:01Z","lastTransitionTime":"2025-11-24T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.548108 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.548161 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.548184 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.548203 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.548216 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:01Z","lastTransitionTime":"2025-11-24T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.650746 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.650802 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.650812 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.650826 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.650836 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:01Z","lastTransitionTime":"2025-11-24T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.757224 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.757316 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.757341 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.757386 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.757410 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:01Z","lastTransitionTime":"2025-11-24T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.859976 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.860359 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.860378 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.860401 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.860421 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:01Z","lastTransitionTime":"2025-11-24T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.962942 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.962966 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.962974 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.962985 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:01 crc kubenswrapper[4915]: I1124 21:21:01.962994 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:01Z","lastTransitionTime":"2025-11-24T21:21:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.065532 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.065577 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.065590 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.065611 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.065626 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:02Z","lastTransitionTime":"2025-11-24T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.168280 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.168581 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.168663 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.168753 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.168880 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:02Z","lastTransitionTime":"2025-11-24T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.271299 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.271577 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.271658 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.271771 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.271900 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:02Z","lastTransitionTime":"2025-11-24T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.374996 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.375080 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.375103 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.375133 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.375157 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:02Z","lastTransitionTime":"2025-11-24T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.426229 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.426269 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:02 crc kubenswrapper[4915]: E1124 21:21:02.426366 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.426306 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:02 crc kubenswrapper[4915]: E1124 21:21:02.426619 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:02 crc kubenswrapper[4915]: E1124 21:21:02.426887 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.444501 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vl494" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7dd98449-7ae2-455f-aa42-fc277ebfd5f2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3af9843806bf84f8bfe1e30750748f73bd9cca4cf9d58671033610365c57f8d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sqrj5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-vl494\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.465434 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"01094696-9a80-41b4-8341-f721d64dcba9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6566ed68b170603480f4f1938004e3c9cde1a814ceba7575a792ee272c03c2e8\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://94f65a630ef6d66b01511f0e4ce0279addf8b6f72ee79283bccc1325b5f198b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://430687a31d119ccef92a1bc31318cef87e3c53b4a12df207f9b9ef8bcc968c9f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-
cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://26fa9d2e4c65eb47a5bdf7ef3f569abd936b8bf98e79ec9656e0edbf1f1cdd60\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.477804 4915 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.477841 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.477849 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.477865 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.477875 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:02Z","lastTransitionTime":"2025-11-24T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.482970 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0b59702f-bbae-4a7e-91f2-292687063f63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c9823ea60f9f0adf6aca07204f39ef6bf6eeb622f4fcba7d3f804ae38f337d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3aa36275462843401ca1bae1eee883
1ff2c28d8e6af24eaea56d765c16ecc8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://014646fee05d1d964cd6d7b56ab09b6c95b56f92796b542c0675a50734e92f37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://947c69acffc884b2bef5a6bb30b407f9e5dd0519fcaf967da29b2b9c1f983459\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.507280 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a99b006a-abb1-44e7-b340-a5abedce2c5d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://970b70730a1600b51174bc5259c4ceaa5d69f203f7e97d9cfcb3f6faada70b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d3c42074355efe3d8bae82fb7f98b7c118e015551379858808bb4fa8c7eb0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e245feaa377bb8d375e72a05cc677c773df04e8d8cf5bef5683ab005977c3a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9d6727d08918ada80c383dfe12778c36bc4799ce54f8da27c273fc771e94f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://961ae25787805870bb8a3ec96f466e1177ce6dcf26e2dde85d14ffdd8a99491d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a9e522fa19ed6a197e0161d966ea7ae824f7f82c219414751163a373dffb3b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff65daf0aba618d9a58c0f71af7712a137d100e1850dcee5f861ebd30ded5b3a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad3c325289e6dd1aff422efba8df2dbd04d175fc16e560e63f1bba43eebc3a03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.523946 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.540673 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.554307 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kg8p2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3edde8ec-c020-4be1-8007-edf769dd0ecc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://30edfd23c61f7dc5eaf8ee660704c66757e7b466d80bded9b08cd0273dce58dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pttv5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kg8p2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.569903 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fd0102e-d41b-4396-bf60-b22178c3e574\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:19:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://510ae60f2ffed76646a78b86b0dff20eb5b29ea33722dccd1deddef2f19c013c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b02ca1a6a1956540022f473bf935bd00b7f71318500104026077cdcd07a2d74\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://addab57879a0f5117f8023b4407ac6930f81351fdfad33230cdaf16de5303bd0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ee0a13e9bf89dff0ecb04cb1d3bdc9a88c9f909554863b25ce87c45dca918\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0bd6dbc8716a2b77a285c7e436d253f11091c47b5d8a292556426b56e6039c54\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-24T21:19:56Z\\\",\\\"message\\\":\\\"W1124 21:19:45.647300 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1124 21:19:45.647585 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764019185 cert, and key in /tmp/serving-cert-4023393187/serving-signer.crt, /tmp/serving-cert-4023393187/serving-signer.key\\\\nI1124 21:19:46.063707 1 observer_polling.go:159] Starting file observer\\\\nW1124 21:19:46.067804 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1124 21:19:46.068047 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1124 21:19:46.069370 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4023393187/tls.crt::/tmp/serving-cert-4023393187/tls.key\\\\\\\"\\\\nF1124 21:19:56.373690 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1f004309ac0ef1455deb3bd1c58124d253018f88a65f00fb8a6c803e2f4221a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:19:45Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c8298d90864ccee1776889ba85e2f8357b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://100b6f86fa0bc6d90725b2f9c53ca6c829
8d90864ccee1776889ba85e2f8357b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:19:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:19:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:19:42Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.580047 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.580092 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.580104 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.580134 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.580145 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:02Z","lastTransitionTime":"2025-11-24T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.582010 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.593305 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a62fc83839ca150ad93c6e2e7c52a9f0d31efd20e3a994bfaf5f06d2fcf4a892\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.606078 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b71a6eb1-12c2-4f84-875b-868c12dd17b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://17e99580d33be14d59916fae025c81700120a64889ea614c1f04d18be178b533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab8a659fe3a446efc8c7c95701df8d18b1e19a95301cacd2424b7374bfb678b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b51061703378c2960c86f5494737c71fa6c2f96aac6e26e098f75a17c596acc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503e5cb069f7425a970d88ff4d581cc223ca720d093260753e2966af27cb65f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c762fb4e0d1132e74d24b9360748ec1e5e4f5ac4664eb97034190c93990ba28\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://efa838b19d820d7f7f99972bc7684348197d8b398d0cc6317d711e25d3f85c1b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb8fcf3409c80bd943bd0b73812ecbacc5deb9c12e82cc412c128f64fc0365d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7wqvc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r7mbp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.617406 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f3e92c-d4a6-421c-b970-3d6f6baf0ee9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27035ec78eaedd29238e13769c857bb0f8b79cfa4f5218f99438636691166caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fc03c3831c2b38d521328d59ec14fc608766952c77abc524de05c7d6abc6679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cvs4t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lpnnr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.628309 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-b8kq8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f5b8930d-4919-4a02-a962-c93b5f8f4ad3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://926013354edf1382934bf5829af75dc38d00843d1d93ae599bfcedd1322571d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:50Z\\\",\\\"message\\\":\\\"2025-11-24T21:20:04+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0f44749b-b8d6-4b46-95e9-9d3afc3cbd35\\\\n2025-11-24T21:20:04+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0f44749b-b8d6-4b46-95e9-9d3afc3cbd35 to /host/opt/cni/bin/\\\\n2025-11-24T21:20:05Z [verbose] multus-daemon started\\\\n2025-11-24T21:20:05Z [verbose] Readiness Indicator file check\\\\n2025-11-24T21:20:50Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/l
ib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xjn9v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-b8kq8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.639086 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2fb6b6c15a2a6f8dc9dd5a48f3068e46d3fe55b0055f355cce9d5c295a88e4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.650246 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c0f6a693cc0f42d968fec6141967f53c570110e15043a158dbd954afc944e18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://24c5e4986979140c77606883bcea7e511bedc6160621bf205a25ccc71e710ffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.667390 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3f235785-6b02-4304-99b8-3b216c369d45\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-24T21:20:56Z\\\",\\\"message\\\":\\\"I1124 21:20:56.409410 6914 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1124 21:20:56.409420 6914 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1124 21:20:56.409443 6914 handler.go:208] Removed *v1.Node event handler 7\\\\nI1124 21:20:56.409449 6914 handler.go:190] Sending *v1.Pod event handler 3 for 
removal\\\\nI1124 21:20:56.409464 6914 handler.go:208] Removed *v1.Node event handler 2\\\\nI1124 21:20:56.409470 6914 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1124 21:20:56.409472 6914 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1124 21:20:56.409497 6914 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1124 21:20:56.409506 6914 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1124 21:20:56.409518 6914 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1124 21:20:56.409535 6914 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1124 21:20:56.409542 6914 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1124 21:20:56.409553 6914 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1124 21:20:56.409565 6914 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1124 21:20:56.409571 6914 factory.go:656] Stopping watch factory\\\\nI1124 21:20:56.409594 6914 ovnkube.go:599] Stopped ovnkube\\\\nI1124 2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e6fb280d5b346c93b8
ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-24T21:20:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l2w9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-jmqqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.680000 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9eade8b9a9cbf562e81e513faf1fdf8eee750905e13888223447b314481669\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c198
42d7cdff719d15d3f14f1336\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-24T21:20:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45qlw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:02Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lxwjd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.682441 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.682514 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.682523 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:02 crc 
kubenswrapper[4915]: I1124 21:21:02.682537 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.682546 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:02Z","lastTransitionTime":"2025-11-24T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.691989 4915 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a785aaf6-e561-47e9-a3ff-69e6930c5941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-24T21:20:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlb48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-24T21:20:16Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hkc4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:02Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:02 crc 
kubenswrapper[4915]: I1124 21:21:02.786734 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.786817 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.786828 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.786843 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.786854 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:02Z","lastTransitionTime":"2025-11-24T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.899101 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.899150 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.899162 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.899182 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:02 crc kubenswrapper[4915]: I1124 21:21:02.899195 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:02Z","lastTransitionTime":"2025-11-24T21:21:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.002304 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.002346 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.002357 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.002381 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.002405 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:03Z","lastTransitionTime":"2025-11-24T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.105642 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.105683 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.105694 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.105709 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.105720 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:03Z","lastTransitionTime":"2025-11-24T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.209621 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.209710 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.209729 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.209759 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.209821 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:03Z","lastTransitionTime":"2025-11-24T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.313116 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.313176 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.313195 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.313219 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.313236 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:03Z","lastTransitionTime":"2025-11-24T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.416333 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.416404 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.416421 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.416448 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.416491 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:03Z","lastTransitionTime":"2025-11-24T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.425869 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:03 crc kubenswrapper[4915]: E1124 21:21:03.426116 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.519508 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.519546 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.519555 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.519571 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.519580 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:03Z","lastTransitionTime":"2025-11-24T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.622855 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.622912 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.622929 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.622952 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.622965 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:03Z","lastTransitionTime":"2025-11-24T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.725762 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.725849 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.725859 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.725874 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.725884 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:03Z","lastTransitionTime":"2025-11-24T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.829147 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.829222 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.829248 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.829279 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.829300 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:03Z","lastTransitionTime":"2025-11-24T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.932319 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.932382 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.932394 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.932410 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:03 crc kubenswrapper[4915]: I1124 21:21:03.932420 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:03Z","lastTransitionTime":"2025-11-24T21:21:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.036522 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.036593 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.036617 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.036645 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.036667 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:04Z","lastTransitionTime":"2025-11-24T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.140201 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.140311 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.140339 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.140370 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.140415 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:04Z","lastTransitionTime":"2025-11-24T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.243658 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.243860 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.243884 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.243920 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.243942 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:04Z","lastTransitionTime":"2025-11-24T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.346923 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.346996 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.347014 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.347041 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.347062 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:04Z","lastTransitionTime":"2025-11-24T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.426504 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.426630 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:04 crc kubenswrapper[4915]: E1124 21:21:04.426757 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.426830 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:04 crc kubenswrapper[4915]: E1124 21:21:04.427631 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:04 crc kubenswrapper[4915]: E1124 21:21:04.427884 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.449899 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.449959 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.449973 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.449995 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.450010 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:04Z","lastTransitionTime":"2025-11-24T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.551949 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.552019 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.552037 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.552062 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.552079 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:04Z","lastTransitionTime":"2025-11-24T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.654832 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.654921 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.654947 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.654981 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.655006 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:04Z","lastTransitionTime":"2025-11-24T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.757674 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.757730 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.757744 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.757805 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.757824 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:04Z","lastTransitionTime":"2025-11-24T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.860656 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.860705 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.860716 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.860733 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.860745 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:04Z","lastTransitionTime":"2025-11-24T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.963893 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.963966 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.963990 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.964021 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:04 crc kubenswrapper[4915]: I1124 21:21:04.964039 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:04Z","lastTransitionTime":"2025-11-24T21:21:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.066965 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.067036 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.067050 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.067068 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.067080 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:05Z","lastTransitionTime":"2025-11-24T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.169296 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.169366 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.169381 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.169404 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.169421 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:05Z","lastTransitionTime":"2025-11-24T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.272932 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.273004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.273026 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.273055 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.273077 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:05Z","lastTransitionTime":"2025-11-24T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.376977 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.377016 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.377024 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.377039 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.377048 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:05Z","lastTransitionTime":"2025-11-24T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.426238 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:05 crc kubenswrapper[4915]: E1124 21:21:05.426379 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.479642 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.479703 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.479713 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.479751 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.479769 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:05Z","lastTransitionTime":"2025-11-24T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.582917 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.582978 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.582999 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.583028 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.583049 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:05Z","lastTransitionTime":"2025-11-24T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.686024 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.686080 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.686095 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.686115 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.686128 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:05Z","lastTransitionTime":"2025-11-24T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.789344 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.789396 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.789411 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.789431 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.789448 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:05Z","lastTransitionTime":"2025-11-24T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.892194 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.892248 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.892257 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.892271 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.892280 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:05Z","lastTransitionTime":"2025-11-24T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.997403 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.997438 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.997446 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.997460 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:05 crc kubenswrapper[4915]: I1124 21:21:05.997470 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:05Z","lastTransitionTime":"2025-11-24T21:21:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.100934 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.100991 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.101002 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.101027 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.101040 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:06Z","lastTransitionTime":"2025-11-24T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.204612 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.204689 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.204718 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.204739 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.204754 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:06Z","lastTransitionTime":"2025-11-24T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.307011 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.307037 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.307044 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.307057 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.307065 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:06Z","lastTransitionTime":"2025-11-24T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.312810 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.312868 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.312889 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.312918 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.312948 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.313024 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.312997627 +0000 UTC m=+148.629249840 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.313037 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.313064 4915 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.313140 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.313120232 +0000 UTC m=+148.629372435 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.313070 4915 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.313074 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.313238 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.313292 4915 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.313299 4915 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.313314 4915 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.313237 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.313222285 +0000 UTC m=+148.629474498 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.313417 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.313386162 +0000 UTC m=+148.629638365 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.313458 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.313445864 +0000 UTC m=+148.629698067 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.410153 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.410227 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.410251 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.410279 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.410304 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:06Z","lastTransitionTime":"2025-11-24T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.426203 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.426316 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.426325 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.426443 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941"
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.426572 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:21:06 crc kubenswrapper[4915]: E1124 21:21:06.426882 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.513322 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.513377 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.513394 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.513417 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.513437 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:06Z","lastTransitionTime":"2025-11-24T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.616862 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.616913 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.616930 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.616956 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.616973 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:06Z","lastTransitionTime":"2025-11-24T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.719645 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.719710 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.719727 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.719752 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.719772 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:06Z","lastTransitionTime":"2025-11-24T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.823180 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.823258 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.823282 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.823315 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.823338 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:06Z","lastTransitionTime":"2025-11-24T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.926325 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.926386 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.926397 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.926421 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:06 crc kubenswrapper[4915]: I1124 21:21:06.926442 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:06Z","lastTransitionTime":"2025-11-24T21:21:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.029194 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.029706 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.029720 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.029741 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.029754 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:07Z","lastTransitionTime":"2025-11-24T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.133602 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.133643 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.133656 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.133675 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.133701 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:07Z","lastTransitionTime":"2025-11-24T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.238967 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.239066 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.239089 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.239138 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.239164 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:07Z","lastTransitionTime":"2025-11-24T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.342544 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.342611 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.342633 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.342661 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.342683 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:07Z","lastTransitionTime":"2025-11-24T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.426587 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 24 21:21:07 crc kubenswrapper[4915]: E1124 21:21:07.426867 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.445628 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.445680 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.445695 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.445720 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.445738 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:07Z","lastTransitionTime":"2025-11-24T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.548349 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.548409 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.548432 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.548464 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.548485 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:07Z","lastTransitionTime":"2025-11-24T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.651180 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.651226 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.651238 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.651255 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.651268 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:07Z","lastTransitionTime":"2025-11-24T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.755144 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.755202 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.755218 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.755241 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.755258 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:07Z","lastTransitionTime":"2025-11-24T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.858593 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.858663 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.858682 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.858710 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.858732 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:07Z","lastTransitionTime":"2025-11-24T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.963008 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.963086 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.963108 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.963137 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:07 crc kubenswrapper[4915]: I1124 21:21:07.963158 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:07Z","lastTransitionTime":"2025-11-24T21:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.067203 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.067289 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.067313 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.067349 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.067374 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:08Z","lastTransitionTime":"2025-11-24T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.170268 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.170339 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.170376 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.170409 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.170429 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:08Z","lastTransitionTime":"2025-11-24T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.273270 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.273343 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.273364 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.273394 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.273415 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:08Z","lastTransitionTime":"2025-11-24T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.376402 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.376452 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.376465 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.376483 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.376494 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:08Z","lastTransitionTime":"2025-11-24T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.426661 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.426688 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 24 21:21:08 crc kubenswrapper[4915]: E1124 21:21:08.427002 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.427050 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w"
Nov 24 21:21:08 crc kubenswrapper[4915]: E1124 21:21:08.427434 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941"
Nov 24 21:21:08 crc kubenswrapper[4915]: E1124 21:21:08.427600 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.441972 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.479107 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.479153 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.479164 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.479182 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.479195 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:08Z","lastTransitionTime":"2025-11-24T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.581961 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.582023 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.582040 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.582063 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.582079 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:08Z","lastTransitionTime":"2025-11-24T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.685569 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.685640 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.685651 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.685673 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.685687 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:08Z","lastTransitionTime":"2025-11-24T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.789453 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.789538 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.789565 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.789599 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.789621 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:08Z","lastTransitionTime":"2025-11-24T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.893377 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.893446 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.893464 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.893490 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.893508 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:08Z","lastTransitionTime":"2025-11-24T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.996471 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.996530 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.996547 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.996571 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:08 crc kubenswrapper[4915]: I1124 21:21:08.996588 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:08Z","lastTransitionTime":"2025-11-24T21:21:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.100068 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.100121 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.100139 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.100167 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.100186 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:09Z","lastTransitionTime":"2025-11-24T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.203500 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.203564 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.203579 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.203602 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.203617 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:09Z","lastTransitionTime":"2025-11-24T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.307266 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.307339 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.307358 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.307389 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.307407 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:09Z","lastTransitionTime":"2025-11-24T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.410442 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.410496 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.410511 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.410536 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.410554 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:09Z","lastTransitionTime":"2025-11-24T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.426051 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:09 crc kubenswrapper[4915]: E1124 21:21:09.426388 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.427500 4915 scope.go:117] "RemoveContainer" containerID="ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960" Nov 24 21:21:09 crc kubenswrapper[4915]: E1124 21:21:09.428025 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.514355 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.514413 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.514430 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.514455 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.514474 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:09Z","lastTransitionTime":"2025-11-24T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.617963 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.618033 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.618051 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.618079 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.618098 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:09Z","lastTransitionTime":"2025-11-24T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.720997 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.721058 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.721075 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.721099 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.721116 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:09Z","lastTransitionTime":"2025-11-24T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.746831 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.746891 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.746905 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.746926 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.746941 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:09Z","lastTransitionTime":"2025-11-24T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:09 crc kubenswrapper[4915]: E1124 21:21:09.772037 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.777636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.777683 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.777702 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.777727 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.777745 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:09Z","lastTransitionTime":"2025-11-24T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:09 crc kubenswrapper[4915]: E1124 21:21:09.799883 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.804404 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.804458 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.804474 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.804495 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.804507 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:09Z","lastTransitionTime":"2025-11-24T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:09 crc kubenswrapper[4915]: E1124 21:21:09.821506 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.825834 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.825910 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.825930 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.825955 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.825974 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:09Z","lastTransitionTime":"2025-11-24T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:09 crc kubenswrapper[4915]: E1124 21:21:09.841642 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.846700 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.846812 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.846836 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.846861 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.846880 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:09Z","lastTransitionTime":"2025-11-24T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:09 crc kubenswrapper[4915]: E1124 21:21:09.867419 4915 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-24T21:21:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f82f57df-1c60-47eb-b103-4027a05787c0\\\",\\\"systemUUID\\\":\\\"0b6646c2-0b1e-4f58-9c61-a13867a5dfdb\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-24T21:21:09Z is after 2025-08-24T17:21:41Z" Nov 24 21:21:09 crc kubenswrapper[4915]: E1124 21:21:09.867571 4915 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.869620 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.869689 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.869707 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.869734 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.869752 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:09Z","lastTransitionTime":"2025-11-24T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.972997 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.973056 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.973074 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.973099 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:09 crc kubenswrapper[4915]: I1124 21:21:09.973117 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:09Z","lastTransitionTime":"2025-11-24T21:21:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.076709 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.076801 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.076819 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.076875 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.076893 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:10Z","lastTransitionTime":"2025-11-24T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.179974 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.180081 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.180102 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.180128 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.180145 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:10Z","lastTransitionTime":"2025-11-24T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.283863 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.283940 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.283959 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.283983 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.284000 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:10Z","lastTransitionTime":"2025-11-24T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.391740 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.391855 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.391878 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.391904 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.391925 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:10Z","lastTransitionTime":"2025-11-24T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.432260 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:10 crc kubenswrapper[4915]: E1124 21:21:10.432350 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.432549 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:10 crc kubenswrapper[4915]: E1124 21:21:10.432606 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.432770 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:10 crc kubenswrapper[4915]: E1124 21:21:10.432871 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.495041 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.495121 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.495146 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.495177 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.495198 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:10Z","lastTransitionTime":"2025-11-24T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.598510 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.598566 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.598583 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.598606 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.598625 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:10Z","lastTransitionTime":"2025-11-24T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.701022 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.701101 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.701130 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.701159 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.701182 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:10Z","lastTransitionTime":"2025-11-24T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.804546 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.804604 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.804626 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.804654 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.804675 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:10Z","lastTransitionTime":"2025-11-24T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.907736 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.907805 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.907815 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.907832 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:10 crc kubenswrapper[4915]: I1124 21:21:10.907865 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:10Z","lastTransitionTime":"2025-11-24T21:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.010725 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.010829 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.010849 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.010873 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.010889 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:11Z","lastTransitionTime":"2025-11-24T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.114306 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.114380 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.114404 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.114432 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.114455 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:11Z","lastTransitionTime":"2025-11-24T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.217570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.217656 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.217689 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.217719 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.217739 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:11Z","lastTransitionTime":"2025-11-24T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.321050 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.321115 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.321139 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.321169 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.321195 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:11Z","lastTransitionTime":"2025-11-24T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.424908 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.424967 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.424984 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.425003 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.425015 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:11Z","lastTransitionTime":"2025-11-24T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.425675 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:11 crc kubenswrapper[4915]: E1124 21:21:11.425885 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.528004 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.528050 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.528060 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.528077 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.528089 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:11Z","lastTransitionTime":"2025-11-24T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.630854 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.630916 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.630935 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.630966 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.631006 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:11Z","lastTransitionTime":"2025-11-24T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.734048 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.734117 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.734141 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.734169 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.734192 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:11Z","lastTransitionTime":"2025-11-24T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.837364 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.837426 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.837445 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.837470 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.837489 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:11Z","lastTransitionTime":"2025-11-24T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.940872 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.940945 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.940965 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.940988 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:11 crc kubenswrapper[4915]: I1124 21:21:11.941005 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:11Z","lastTransitionTime":"2025-11-24T21:21:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.043530 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.043608 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.043636 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.043667 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.043691 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:12Z","lastTransitionTime":"2025-11-24T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.147270 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.147318 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.147329 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.147348 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.147360 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:12Z","lastTransitionTime":"2025-11-24T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.250324 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.250366 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.250377 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.250394 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.250405 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:12Z","lastTransitionTime":"2025-11-24T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.352955 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.353009 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.353030 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.353059 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.353080 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:12Z","lastTransitionTime":"2025-11-24T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.426488 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.426575 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:12 crc kubenswrapper[4915]: E1124 21:21:12.426644 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.426982 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:12 crc kubenswrapper[4915]: E1124 21:21:12.426971 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:12 crc kubenswrapper[4915]: E1124 21:21:12.427056 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.456118 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.456174 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.456190 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.456213 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.456229 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:12Z","lastTransitionTime":"2025-11-24T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.513752 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podStartSLOduration=70.513731877 podStartE2EDuration="1m10.513731877s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:21:12.496166069 +0000 UTC m=+90.812418252" watchObservedRunningTime="2025-11-24 21:21:12.513731877 +0000 UTC m=+90.829984050" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.544538 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=4.544519267 podStartE2EDuration="4.544519267s" podCreationTimestamp="2025-11-24 21:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:21:12.528356029 +0000 UTC m=+90.844608242" watchObservedRunningTime="2025-11-24 21:21:12.544519267 +0000 UTC m=+90.860771440" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.558628 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.558696 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.558721 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.558753 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.560131 4915 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:12Z","lastTransitionTime":"2025-11-24T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.605935 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kg8p2" podStartSLOduration=70.605909192 podStartE2EDuration="1m10.605909192s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:21:12.604386897 +0000 UTC m=+90.920639090" watchObservedRunningTime="2025-11-24 21:21:12.605909192 +0000 UTC m=+90.922161405" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.618349 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vl494" podStartSLOduration=70.618326155 podStartE2EDuration="1m10.618326155s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:21:12.618075447 +0000 UTC m=+90.934327630" watchObservedRunningTime="2025-11-24 21:21:12.618326155 +0000 UTC m=+90.934578338" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.637133 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.637106667 podStartE2EDuration="1m6.637106667s" podCreationTimestamp="2025-11-24 21:20:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 21:21:12.636546537 +0000 UTC m=+90.952798750" watchObservedRunningTime="2025-11-24 21:21:12.637106667 +0000 UTC m=+90.953358870" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.651441 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=36.651420569 podStartE2EDuration="36.651420569s" podCreationTimestamp="2025-11-24 21:20:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:21:12.650552608 +0000 UTC m=+90.966804791" watchObservedRunningTime="2025-11-24 21:21:12.651420569 +0000 UTC m=+90.967672772" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.662765 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.662819 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.662832 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.662852 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.662864 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:12Z","lastTransitionTime":"2025-11-24T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.690113 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=70.690085131 podStartE2EDuration="1m10.690085131s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:21:12.689966817 +0000 UTC m=+91.006219030" watchObservedRunningTime="2025-11-24 21:21:12.690085131 +0000 UTC m=+91.006337334" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.721962 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-r7mbp" podStartSLOduration=70.72194423 podStartE2EDuration="1m10.72194423s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:21:12.72083155 +0000 UTC m=+91.037083753" watchObservedRunningTime="2025-11-24 21:21:12.72194423 +0000 UTC m=+91.038196413" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.764797 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.765085 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.765157 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.765237 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.765296 4915 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:12Z","lastTransitionTime":"2025-11-24T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.766015 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.765999665 podStartE2EDuration="1m11.765999665s" podCreationTimestamp="2025-11-24 21:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:21:12.76529696 +0000 UTC m=+91.081549143" watchObservedRunningTime="2025-11-24 21:21:12.765999665 +0000 UTC m=+91.082251848" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.766191 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lpnnr" podStartSLOduration=70.766186411 podStartE2EDuration="1m10.766186411s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:21:12.746433465 +0000 UTC m=+91.062685648" watchObservedRunningTime="2025-11-24 21:21:12.766186411 +0000 UTC m=+91.082438594" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.802410 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-b8kq8" podStartSLOduration=70.802394866 podStartE2EDuration="1m10.802394866s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 
21:21:12.801559926 +0000 UTC m=+91.117812129" watchObservedRunningTime="2025-11-24 21:21:12.802394866 +0000 UTC m=+91.118647039" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.867885 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.867941 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.867960 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.867987 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.868006 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:12Z","lastTransitionTime":"2025-11-24T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.970591 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.970664 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.970687 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.970714 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:12 crc kubenswrapper[4915]: I1124 21:21:12.970737 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:12Z","lastTransitionTime":"2025-11-24T21:21:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.074142 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.074199 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.074217 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.074239 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.074270 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:13Z","lastTransitionTime":"2025-11-24T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.176522 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.176569 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.176583 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.176603 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.176614 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:13Z","lastTransitionTime":"2025-11-24T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.278689 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.278747 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.278762 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.278804 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.278816 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:13Z","lastTransitionTime":"2025-11-24T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.381555 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.381588 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.381599 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.381613 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.381622 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:13Z","lastTransitionTime":"2025-11-24T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.426433 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:13 crc kubenswrapper[4915]: E1124 21:21:13.426929 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.484235 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.484322 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.484339 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.484362 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.484383 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:13Z","lastTransitionTime":"2025-11-24T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.586691 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.586738 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.586749 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.586765 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.586797 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:13Z","lastTransitionTime":"2025-11-24T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.689361 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.689398 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.689408 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.689420 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.689430 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:13Z","lastTransitionTime":"2025-11-24T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.791634 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.791670 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.791678 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.791692 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.791701 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:13Z","lastTransitionTime":"2025-11-24T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.894872 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.894915 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.894925 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.894940 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.894951 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:13Z","lastTransitionTime":"2025-11-24T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.997659 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.997702 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.997712 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.997727 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:13 crc kubenswrapper[4915]: I1124 21:21:13.997737 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:13Z","lastTransitionTime":"2025-11-24T21:21:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.099981 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.100022 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.100033 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.100049 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.100061 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:14Z","lastTransitionTime":"2025-11-24T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.205387 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.205420 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.205431 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.205446 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.205457 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:14Z","lastTransitionTime":"2025-11-24T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.309161 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.309231 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.309254 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.309282 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.309304 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:14Z","lastTransitionTime":"2025-11-24T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.411138 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.411177 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.411186 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.411200 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.411209 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:14Z","lastTransitionTime":"2025-11-24T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.426543 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.426573 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.426594 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:14 crc kubenswrapper[4915]: E1124 21:21:14.426668 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:14 crc kubenswrapper[4915]: E1124 21:21:14.426713 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:14 crc kubenswrapper[4915]: E1124 21:21:14.426766 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.514149 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.514196 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.514209 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.514228 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.514241 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:14Z","lastTransitionTime":"2025-11-24T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.616807 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.616859 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.616872 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.616889 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.616901 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:14Z","lastTransitionTime":"2025-11-24T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.719086 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.719149 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.719168 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.719190 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.719204 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:14Z","lastTransitionTime":"2025-11-24T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.821535 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.821617 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.821646 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.821676 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.821702 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:14Z","lastTransitionTime":"2025-11-24T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.925072 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.925145 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.925168 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.925195 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:14 crc kubenswrapper[4915]: I1124 21:21:14.925217 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:14Z","lastTransitionTime":"2025-11-24T21:21:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.029101 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.029164 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.029180 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.029202 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.029216 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:15Z","lastTransitionTime":"2025-11-24T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.133091 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.133152 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.133170 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.133198 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.133220 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:15Z","lastTransitionTime":"2025-11-24T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.236169 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.236226 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.236241 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.236263 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.236277 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:15Z","lastTransitionTime":"2025-11-24T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.339659 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.339713 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.339727 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.339748 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.339763 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:15Z","lastTransitionTime":"2025-11-24T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.425972 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:15 crc kubenswrapper[4915]: E1124 21:21:15.426133 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.442050 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.442106 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.442125 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.442149 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.442165 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:15Z","lastTransitionTime":"2025-11-24T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.545052 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.545123 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.545136 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.545180 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.545193 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:15Z","lastTransitionTime":"2025-11-24T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.648920 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.648976 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.648991 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.649013 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.649029 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:15Z","lastTransitionTime":"2025-11-24T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.752522 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.752589 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.752607 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.752641 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.752668 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:15Z","lastTransitionTime":"2025-11-24T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.855497 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.855539 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.855550 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.855563 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.855575 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:15Z","lastTransitionTime":"2025-11-24T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.958749 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.958815 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.958827 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.958845 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:15 crc kubenswrapper[4915]: I1124 21:21:15.958857 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:15Z","lastTransitionTime":"2025-11-24T21:21:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.061872 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.061948 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.061966 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.061989 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.062007 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:16Z","lastTransitionTime":"2025-11-24T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.165261 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.165303 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.165312 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.165329 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.165346 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:16Z","lastTransitionTime":"2025-11-24T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.268969 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.269038 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.269061 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.269089 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.269112 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:16Z","lastTransitionTime":"2025-11-24T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.372749 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.372845 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.372863 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.372886 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.372903 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:16Z","lastTransitionTime":"2025-11-24T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.425647 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.425767 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:16 crc kubenswrapper[4915]: E1124 21:21:16.425846 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.425900 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:16 crc kubenswrapper[4915]: E1124 21:21:16.426015 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:16 crc kubenswrapper[4915]: E1124 21:21:16.426220 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.475505 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.475544 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.475552 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.475568 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.475584 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:16Z","lastTransitionTime":"2025-11-24T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.579382 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.579471 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.579504 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.579539 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.579564 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:16Z","lastTransitionTime":"2025-11-24T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.682757 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.682855 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.682895 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.682940 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.682960 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:16Z","lastTransitionTime":"2025-11-24T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.786798 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.786869 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.786881 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.786925 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.786936 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:16Z","lastTransitionTime":"2025-11-24T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.889727 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.889766 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.889796 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.889810 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.889823 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:16Z","lastTransitionTime":"2025-11-24T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.992507 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.993035 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.993120 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.993225 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:16 crc kubenswrapper[4915]: I1124 21:21:16.993293 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:16Z","lastTransitionTime":"2025-11-24T21:21:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.096992 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.097477 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.097555 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.097619 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.097673 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:17Z","lastTransitionTime":"2025-11-24T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.201318 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.201390 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.201409 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.201437 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.201458 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:17Z","lastTransitionTime":"2025-11-24T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.304670 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.304716 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.304726 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.304746 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.304757 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:17Z","lastTransitionTime":"2025-11-24T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.408413 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.408503 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.408521 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.408549 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.408567 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:17Z","lastTransitionTime":"2025-11-24T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.426147 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:17 crc kubenswrapper[4915]: E1124 21:21:17.426526 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.511644 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.511708 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.511725 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.511749 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.511766 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:17Z","lastTransitionTime":"2025-11-24T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.614434 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.614490 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.614501 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.614516 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.614526 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:17Z","lastTransitionTime":"2025-11-24T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.716889 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.716933 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.716944 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.716957 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.716966 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:17Z","lastTransitionTime":"2025-11-24T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.820354 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.820415 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.820434 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.820458 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.820476 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:17Z","lastTransitionTime":"2025-11-24T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.923038 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.923105 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.923121 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.923145 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:17 crc kubenswrapper[4915]: I1124 21:21:17.923165 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:17Z","lastTransitionTime":"2025-11-24T21:21:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.025852 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.025917 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.025941 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.025965 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.025982 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:18Z","lastTransitionTime":"2025-11-24T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.128698 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.128812 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.128840 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.128874 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.128898 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:18Z","lastTransitionTime":"2025-11-24T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.232189 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.232224 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.232234 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.232249 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.232262 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:18Z","lastTransitionTime":"2025-11-24T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.336123 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.336185 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.336208 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.336234 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.336253 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:18Z","lastTransitionTime":"2025-11-24T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.425884 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:18 crc kubenswrapper[4915]: E1124 21:21:18.426114 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.426541 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:18 crc kubenswrapper[4915]: E1124 21:21:18.426683 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.427114 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:18 crc kubenswrapper[4915]: E1124 21:21:18.427247 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.438325 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.438390 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.438415 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.438444 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.438472 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:18Z","lastTransitionTime":"2025-11-24T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.543606 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.543644 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.543655 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.543672 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.543683 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:18Z","lastTransitionTime":"2025-11-24T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.645892 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.645962 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.645985 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.646003 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.646013 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:18Z","lastTransitionTime":"2025-11-24T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.748754 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.748806 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.748819 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.748831 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.748840 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:18Z","lastTransitionTime":"2025-11-24T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.850610 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.850657 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.850667 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.850682 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.850694 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:18Z","lastTransitionTime":"2025-11-24T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.953593 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.953666 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.953689 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.953718 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:18 crc kubenswrapper[4915]: I1124 21:21:18.953740 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:18Z","lastTransitionTime":"2025-11-24T21:21:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.056881 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.056954 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.056979 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.057009 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.057033 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:19Z","lastTransitionTime":"2025-11-24T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.160737 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.169551 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.169568 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.169588 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.169610 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:19Z","lastTransitionTime":"2025-11-24T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.273131 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.273202 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.273222 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.273250 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.273268 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:19Z","lastTransitionTime":"2025-11-24T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.377218 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.377303 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.377328 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.377362 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.377386 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:19Z","lastTransitionTime":"2025-11-24T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.425948 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:19 crc kubenswrapper[4915]: E1124 21:21:19.426155 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.480402 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.480450 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.480467 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.480488 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.480505 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:19Z","lastTransitionTime":"2025-11-24T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.583420 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.583524 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.583543 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.583570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.583591 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:19Z","lastTransitionTime":"2025-11-24T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.686853 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.686981 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.686992 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.687006 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.687033 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:19Z","lastTransitionTime":"2025-11-24T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.790229 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.790285 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.790298 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.790318 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.790338 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:19Z","lastTransitionTime":"2025-11-24T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.893029 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.893086 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.893094 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.893109 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.893117 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:19Z","lastTransitionTime":"2025-11-24T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.996247 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.996312 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.996331 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.996358 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:19 crc kubenswrapper[4915]: I1124 21:21:19.996376 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:19Z","lastTransitionTime":"2025-11-24T21:21:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.063262 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.063307 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.063317 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.063334 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.063348 4915 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-24T21:21:20Z","lastTransitionTime":"2025-11-24T21:21:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.123213 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m"] Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.123675 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.125754 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.127180 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.127178 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.127623 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.169146 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90ac5efb-0d4b-4f93-a607-1981004686df-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.169185 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/90ac5efb-0d4b-4f93-a607-1981004686df-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.169224 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/90ac5efb-0d4b-4f93-a607-1981004686df-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.169251 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90ac5efb-0d4b-4f93-a607-1981004686df-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.169272 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ac5efb-0d4b-4f93-a607-1981004686df-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.270246 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/90ac5efb-0d4b-4f93-a607-1981004686df-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.270339 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90ac5efb-0d4b-4f93-a607-1981004686df-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.270390 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ac5efb-0d4b-4f93-a607-1981004686df-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.270450 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90ac5efb-0d4b-4f93-a607-1981004686df-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.270477 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/90ac5efb-0d4b-4f93-a607-1981004686df-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.270490 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/90ac5efb-0d4b-4f93-a607-1981004686df-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.270576 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/90ac5efb-0d4b-4f93-a607-1981004686df-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.271335 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/90ac5efb-0d4b-4f93-a607-1981004686df-service-ca\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.277193 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90ac5efb-0d4b-4f93-a607-1981004686df-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.291742 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90ac5efb-0d4b-4f93-a607-1981004686df-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-kds4m\" (UID: \"90ac5efb-0d4b-4f93-a607-1981004686df\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.371481 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs\") pod \"network-metrics-daemon-hkc4w\" (UID: \"a785aaf6-e561-47e9-a3ff-69e6930c5941\") " pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:20 crc kubenswrapper[4915]: E1124 21:21:20.371819 4915 secret.go:188] Couldn't get 
secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:21:20 crc kubenswrapper[4915]: E1124 21:21:20.371953 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs podName:a785aaf6-e561-47e9-a3ff-69e6930c5941 nodeName:}" failed. No retries permitted until 2025-11-24 21:22:24.371922716 +0000 UTC m=+162.688174929 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs") pod "network-metrics-daemon-hkc4w" (UID: "a785aaf6-e561-47e9-a3ff-69e6930c5941") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.425805 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.425916 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.425769 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:20 crc kubenswrapper[4915]: E1124 21:21:20.426044 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:20 crc kubenswrapper[4915]: E1124 21:21:20.426177 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:20 crc kubenswrapper[4915]: E1124 21:21:20.426384 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.427536 4915 scope.go:117] "RemoveContainer" containerID="ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960" Nov 24 21:21:20 crc kubenswrapper[4915]: E1124 21:21:20.428012 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" Nov 24 21:21:20 crc kubenswrapper[4915]: I1124 21:21:20.449352 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" Nov 24 21:21:21 crc kubenswrapper[4915]: I1124 21:21:21.004305 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" event={"ID":"90ac5efb-0d4b-4f93-a607-1981004686df","Type":"ContainerStarted","Data":"3ac7331f6922925c71c0d79255bbe7c8f58f8110423dc52550819b8499f1e085"} Nov 24 21:21:21 crc kubenswrapper[4915]: I1124 21:21:21.005524 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" event={"ID":"90ac5efb-0d4b-4f93-a607-1981004686df","Type":"ContainerStarted","Data":"edee6ec03b42509d2cacbeb9e4ed856c91454c02d2d751894ae6d5972d1fc3f2"} Nov 24 21:21:21 crc kubenswrapper[4915]: I1124 21:21:21.028633 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-kds4m" podStartSLOduration=79.028607761 podStartE2EDuration="1m19.028607761s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:21:21.027299084 +0000 UTC m=+99.343551277" watchObservedRunningTime="2025-11-24 21:21:21.028607761 +0000 UTC m=+99.344859964" Nov 24 21:21:21 crc kubenswrapper[4915]: I1124 21:21:21.426549 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:21 crc kubenswrapper[4915]: E1124 21:21:21.426695 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:22 crc kubenswrapper[4915]: I1124 21:21:22.426372 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:22 crc kubenswrapper[4915]: I1124 21:21:22.426408 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:22 crc kubenswrapper[4915]: E1124 21:21:22.428086 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:22 crc kubenswrapper[4915]: I1124 21:21:22.428135 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:22 crc kubenswrapper[4915]: E1124 21:21:22.428301 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:22 crc kubenswrapper[4915]: E1124 21:21:22.428359 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:23 crc kubenswrapper[4915]: I1124 21:21:23.426111 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:23 crc kubenswrapper[4915]: E1124 21:21:23.426407 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:24 crc kubenswrapper[4915]: I1124 21:21:24.426503 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:24 crc kubenswrapper[4915]: I1124 21:21:24.426508 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:24 crc kubenswrapper[4915]: I1124 21:21:24.426557 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:24 crc kubenswrapper[4915]: E1124 21:21:24.428039 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:24 crc kubenswrapper[4915]: E1124 21:21:24.428153 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:24 crc kubenswrapper[4915]: E1124 21:21:24.428242 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:25 crc kubenswrapper[4915]: I1124 21:21:25.425608 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:25 crc kubenswrapper[4915]: E1124 21:21:25.425851 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:26 crc kubenswrapper[4915]: I1124 21:21:26.426216 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:26 crc kubenswrapper[4915]: I1124 21:21:26.426349 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:26 crc kubenswrapper[4915]: E1124 21:21:26.426624 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:26 crc kubenswrapper[4915]: I1124 21:21:26.426658 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:26 crc kubenswrapper[4915]: E1124 21:21:26.426885 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:26 crc kubenswrapper[4915]: E1124 21:21:26.427001 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:27 crc kubenswrapper[4915]: I1124 21:21:27.425967 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:27 crc kubenswrapper[4915]: E1124 21:21:27.426146 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:28 crc kubenswrapper[4915]: I1124 21:21:28.426246 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:28 crc kubenswrapper[4915]: I1124 21:21:28.426298 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:28 crc kubenswrapper[4915]: E1124 21:21:28.426463 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:28 crc kubenswrapper[4915]: I1124 21:21:28.426633 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:28 crc kubenswrapper[4915]: E1124 21:21:28.426773 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:28 crc kubenswrapper[4915]: E1124 21:21:28.426938 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:29 crc kubenswrapper[4915]: I1124 21:21:29.426574 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:29 crc kubenswrapper[4915]: E1124 21:21:29.426882 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:30 crc kubenswrapper[4915]: I1124 21:21:30.425748 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:30 crc kubenswrapper[4915]: I1124 21:21:30.425748 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:30 crc kubenswrapper[4915]: I1124 21:21:30.425874 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:30 crc kubenswrapper[4915]: E1124 21:21:30.426435 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:30 crc kubenswrapper[4915]: E1124 21:21:30.426620 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:30 crc kubenswrapper[4915]: E1124 21:21:30.426634 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:31 crc kubenswrapper[4915]: I1124 21:21:31.426232 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:31 crc kubenswrapper[4915]: E1124 21:21:31.426667 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:32 crc kubenswrapper[4915]: I1124 21:21:32.426104 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:32 crc kubenswrapper[4915]: I1124 21:21:32.426143 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:32 crc kubenswrapper[4915]: I1124 21:21:32.426292 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:32 crc kubenswrapper[4915]: E1124 21:21:32.430126 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:32 crc kubenswrapper[4915]: E1124 21:21:32.430480 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:32 crc kubenswrapper[4915]: E1124 21:21:32.430732 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:33 crc kubenswrapper[4915]: I1124 21:21:33.425958 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:33 crc kubenswrapper[4915]: E1124 21:21:33.426135 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:34 crc kubenswrapper[4915]: I1124 21:21:34.425715 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:34 crc kubenswrapper[4915]: I1124 21:21:34.425751 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:34 crc kubenswrapper[4915]: I1124 21:21:34.425818 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:34 crc kubenswrapper[4915]: E1124 21:21:34.425987 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:34 crc kubenswrapper[4915]: E1124 21:21:34.426088 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:34 crc kubenswrapper[4915]: E1124 21:21:34.426374 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:35 crc kubenswrapper[4915]: I1124 21:21:35.425888 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:35 crc kubenswrapper[4915]: E1124 21:21:35.426595 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:35 crc kubenswrapper[4915]: I1124 21:21:35.426966 4915 scope.go:117] "RemoveContainer" containerID="ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960" Nov 24 21:21:35 crc kubenswrapper[4915]: E1124 21:21:35.427231 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-jmqqt_openshift-ovn-kubernetes(3f235785-6b02-4304-99b8-3b216c369d45)\"" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" Nov 24 21:21:36 crc kubenswrapper[4915]: I1124 21:21:36.425936 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:36 crc kubenswrapper[4915]: I1124 21:21:36.425955 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:36 crc kubenswrapper[4915]: E1124 21:21:36.426129 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:36 crc kubenswrapper[4915]: E1124 21:21:36.426288 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:36 crc kubenswrapper[4915]: I1124 21:21:36.426451 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:36 crc kubenswrapper[4915]: E1124 21:21:36.426548 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:37 crc kubenswrapper[4915]: I1124 21:21:37.061557 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b8kq8_f5b8930d-4919-4a02-a962-c93b5f8f4ad3/kube-multus/1.log" Nov 24 21:21:37 crc kubenswrapper[4915]: I1124 21:21:37.062108 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b8kq8_f5b8930d-4919-4a02-a962-c93b5f8f4ad3/kube-multus/0.log" Nov 24 21:21:37 crc kubenswrapper[4915]: I1124 21:21:37.062156 4915 generic.go:334] "Generic (PLEG): container finished" podID="f5b8930d-4919-4a02-a962-c93b5f8f4ad3" containerID="926013354edf1382934bf5829af75dc38d00843d1d93ae599bfcedd1322571d7" exitCode=1 Nov 24 21:21:37 crc kubenswrapper[4915]: I1124 21:21:37.062191 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b8kq8" event={"ID":"f5b8930d-4919-4a02-a962-c93b5f8f4ad3","Type":"ContainerDied","Data":"926013354edf1382934bf5829af75dc38d00843d1d93ae599bfcedd1322571d7"} Nov 24 21:21:37 crc kubenswrapper[4915]: I1124 21:21:37.062227 4915 scope.go:117] "RemoveContainer" containerID="5d4ae984b3666a7ad7abe587d0925f16df2d2d3a36db9053973c61eaf9220fff" Nov 24 21:21:37 crc kubenswrapper[4915]: I1124 21:21:37.062621 4915 scope.go:117] "RemoveContainer" containerID="926013354edf1382934bf5829af75dc38d00843d1d93ae599bfcedd1322571d7" Nov 24 21:21:37 crc kubenswrapper[4915]: E1124 21:21:37.062954 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-b8kq8_openshift-multus(f5b8930d-4919-4a02-a962-c93b5f8f4ad3)\"" pod="openshift-multus/multus-b8kq8" podUID="f5b8930d-4919-4a02-a962-c93b5f8f4ad3" Nov 24 21:21:37 crc kubenswrapper[4915]: I1124 21:21:37.426075 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:37 crc kubenswrapper[4915]: E1124 21:21:37.426456 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:38 crc kubenswrapper[4915]: I1124 21:21:38.067432 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b8kq8_f5b8930d-4919-4a02-a962-c93b5f8f4ad3/kube-multus/1.log" Nov 24 21:21:38 crc kubenswrapper[4915]: I1124 21:21:38.426207 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:38 crc kubenswrapper[4915]: I1124 21:21:38.426208 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:38 crc kubenswrapper[4915]: E1124 21:21:38.426451 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:38 crc kubenswrapper[4915]: E1124 21:21:38.426527 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:38 crc kubenswrapper[4915]: I1124 21:21:38.427042 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:38 crc kubenswrapper[4915]: E1124 21:21:38.427191 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:39 crc kubenswrapper[4915]: I1124 21:21:39.426116 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:39 crc kubenswrapper[4915]: E1124 21:21:39.426317 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:40 crc kubenswrapper[4915]: I1124 21:21:40.426026 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:40 crc kubenswrapper[4915]: I1124 21:21:40.426027 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:40 crc kubenswrapper[4915]: E1124 21:21:40.426156 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:40 crc kubenswrapper[4915]: I1124 21:21:40.426222 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:40 crc kubenswrapper[4915]: E1124 21:21:40.426320 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:40 crc kubenswrapper[4915]: E1124 21:21:40.426436 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:41 crc kubenswrapper[4915]: I1124 21:21:41.426097 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:41 crc kubenswrapper[4915]: E1124 21:21:41.426276 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:42 crc kubenswrapper[4915]: I1124 21:21:42.426029 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:42 crc kubenswrapper[4915]: I1124 21:21:42.426071 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:42 crc kubenswrapper[4915]: E1124 21:21:42.427240 4915 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 24 21:21:42 crc kubenswrapper[4915]: E1124 21:21:42.427271 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:42 crc kubenswrapper[4915]: I1124 21:21:42.427292 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:42 crc kubenswrapper[4915]: E1124 21:21:42.427408 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:42 crc kubenswrapper[4915]: E1124 21:21:42.427528 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:42 crc kubenswrapper[4915]: E1124 21:21:42.561690 4915 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:21:43 crc kubenswrapper[4915]: I1124 21:21:43.426536 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:43 crc kubenswrapper[4915]: E1124 21:21:43.426667 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:44 crc kubenswrapper[4915]: I1124 21:21:44.426596 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:44 crc kubenswrapper[4915]: I1124 21:21:44.426641 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:44 crc kubenswrapper[4915]: I1124 21:21:44.426707 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:44 crc kubenswrapper[4915]: E1124 21:21:44.426811 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:44 crc kubenswrapper[4915]: E1124 21:21:44.426944 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:44 crc kubenswrapper[4915]: E1124 21:21:44.427075 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:45 crc kubenswrapper[4915]: I1124 21:21:45.426481 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:45 crc kubenswrapper[4915]: E1124 21:21:45.426666 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:46 crc kubenswrapper[4915]: I1124 21:21:46.426021 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:46 crc kubenswrapper[4915]: I1124 21:21:46.426163 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:46 crc kubenswrapper[4915]: E1124 21:21:46.426280 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:46 crc kubenswrapper[4915]: I1124 21:21:46.426354 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:46 crc kubenswrapper[4915]: E1124 21:21:46.426554 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:46 crc kubenswrapper[4915]: E1124 21:21:46.427066 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:46 crc kubenswrapper[4915]: I1124 21:21:46.427383 4915 scope.go:117] "RemoveContainer" containerID="ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960" Nov 24 21:21:47 crc kubenswrapper[4915]: I1124 21:21:47.101033 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/3.log" Nov 24 21:21:47 crc kubenswrapper[4915]: I1124 21:21:47.104275 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerStarted","Data":"e47e7134ba9ef188b696b948409a4823455230ba2a169348b8aca9dccac27514"} Nov 24 21:21:47 crc kubenswrapper[4915]: I1124 21:21:47.104725 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:21:47 crc kubenswrapper[4915]: I1124 21:21:47.144273 
4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podStartSLOduration=105.144242207 podStartE2EDuration="1m45.144242207s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:21:47.139970824 +0000 UTC m=+125.456223027" watchObservedRunningTime="2025-11-24 21:21:47.144242207 +0000 UTC m=+125.460494420" Nov 24 21:21:47 crc kubenswrapper[4915]: I1124 21:21:47.407955 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hkc4w"] Nov 24 21:21:47 crc kubenswrapper[4915]: I1124 21:21:47.408125 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:47 crc kubenswrapper[4915]: E1124 21:21:47.408253 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:47 crc kubenswrapper[4915]: I1124 21:21:47.426505 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:47 crc kubenswrapper[4915]: E1124 21:21:47.426681 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:47 crc kubenswrapper[4915]: E1124 21:21:47.563980 4915 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:21:48 crc kubenswrapper[4915]: I1124 21:21:48.426206 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:48 crc kubenswrapper[4915]: I1124 21:21:48.426339 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:48 crc kubenswrapper[4915]: E1124 21:21:48.426364 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:48 crc kubenswrapper[4915]: E1124 21:21:48.426570 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:49 crc kubenswrapper[4915]: I1124 21:21:49.425546 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:49 crc kubenswrapper[4915]: E1124 21:21:49.425710 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:49 crc kubenswrapper[4915]: I1124 21:21:49.425549 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:49 crc kubenswrapper[4915]: E1124 21:21:49.425956 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:50 crc kubenswrapper[4915]: I1124 21:21:50.425715 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:50 crc kubenswrapper[4915]: I1124 21:21:50.425742 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:50 crc kubenswrapper[4915]: E1124 21:21:50.426020 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:50 crc kubenswrapper[4915]: E1124 21:21:50.426185 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:51 crc kubenswrapper[4915]: I1124 21:21:51.426011 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:51 crc kubenswrapper[4915]: I1124 21:21:51.426131 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:51 crc kubenswrapper[4915]: E1124 21:21:51.426147 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:51 crc kubenswrapper[4915]: E1124 21:21:51.426327 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:51 crc kubenswrapper[4915]: I1124 21:21:51.426959 4915 scope.go:117] "RemoveContainer" containerID="926013354edf1382934bf5829af75dc38d00843d1d93ae599bfcedd1322571d7" Nov 24 21:21:52 crc kubenswrapper[4915]: I1124 21:21:52.121821 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b8kq8_f5b8930d-4919-4a02-a962-c93b5f8f4ad3/kube-multus/1.log" Nov 24 21:21:52 crc kubenswrapper[4915]: I1124 21:21:52.122131 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b8kq8" event={"ID":"f5b8930d-4919-4a02-a962-c93b5f8f4ad3","Type":"ContainerStarted","Data":"b4dbca3c2e2b93a7e5cc889b1b96416b4a9df27216e9ed45cb8ff4b73b75f830"} Nov 24 21:21:52 crc kubenswrapper[4915]: I1124 21:21:52.426317 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:52 crc kubenswrapper[4915]: I1124 21:21:52.426996 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:52 crc kubenswrapper[4915]: E1124 21:21:52.427528 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:52 crc kubenswrapper[4915]: E1124 21:21:52.427620 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:52 crc kubenswrapper[4915]: E1124 21:21:52.565059 4915 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:21:53 crc kubenswrapper[4915]: I1124 21:21:53.426434 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:53 crc kubenswrapper[4915]: E1124 21:21:53.426658 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:53 crc kubenswrapper[4915]: I1124 21:21:53.427029 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:53 crc kubenswrapper[4915]: E1124 21:21:53.427216 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:54 crc kubenswrapper[4915]: I1124 21:21:54.426504 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:54 crc kubenswrapper[4915]: I1124 21:21:54.426551 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:54 crc kubenswrapper[4915]: E1124 21:21:54.426656 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:54 crc kubenswrapper[4915]: E1124 21:21:54.426845 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:55 crc kubenswrapper[4915]: I1124 21:21:55.426299 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:55 crc kubenswrapper[4915]: E1124 21:21:55.426500 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:55 crc kubenswrapper[4915]: I1124 21:21:55.426980 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:55 crc kubenswrapper[4915]: E1124 21:21:55.427113 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:56 crc kubenswrapper[4915]: I1124 21:21:56.426539 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:56 crc kubenswrapper[4915]: E1124 21:21:56.426644 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 24 21:21:56 crc kubenswrapper[4915]: I1124 21:21:56.426548 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:56 crc kubenswrapper[4915]: E1124 21:21:56.426852 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 24 21:21:57 crc kubenswrapper[4915]: I1124 21:21:57.426167 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:57 crc kubenswrapper[4915]: I1124 21:21:57.426249 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:57 crc kubenswrapper[4915]: E1124 21:21:57.426358 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hkc4w" podUID="a785aaf6-e561-47e9-a3ff-69e6930c5941" Nov 24 21:21:57 crc kubenswrapper[4915]: E1124 21:21:57.426467 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 24 21:21:58 crc kubenswrapper[4915]: I1124 21:21:58.426249 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:21:58 crc kubenswrapper[4915]: I1124 21:21:58.426320 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:21:58 crc kubenswrapper[4915]: I1124 21:21:58.429163 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 24 21:21:58 crc kubenswrapper[4915]: I1124 21:21:58.429428 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 24 21:21:58 crc kubenswrapper[4915]: I1124 21:21:58.430189 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 24 21:21:58 crc kubenswrapper[4915]: I1124 21:21:58.431427 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 24 21:21:59 crc kubenswrapper[4915]: I1124 21:21:59.426223 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:21:59 crc kubenswrapper[4915]: I1124 21:21:59.426254 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:21:59 crc kubenswrapper[4915]: I1124 21:21:59.431393 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 24 21:21:59 crc kubenswrapper[4915]: I1124 21:21:59.432444 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.754570 4915 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.799540 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-p7n9h"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.800199 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.802435 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.802816 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.803069 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.803316 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.803490 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.803714 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.804470 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bp8jj"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.805049 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d8b76"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.805077 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.806168 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.811139 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.811404 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.811646 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.814193 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.814441 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.814521 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.814702 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.814988 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.815287 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.817829 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.818453 4915 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-9qlcq"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.818655 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.818844 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.818940 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.819255 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.819825 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.820111 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.821182 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.821504 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.821555 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.822139 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.822527 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.823028 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.823176 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.823672 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.824055 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.824437 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.824573 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.827531 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-config\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.827627 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/15b4822e-5f78-4c15-a2ad-641d0af56a1b-encryption-config\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.827683 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-etcd-serving-ca\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.827717 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqxjl\" (UniqueName: \"kubernetes.io/projected/0df5d6ad-e65b-4070-909d-0bcd055db60f-kube-api-access-gqxjl\") pod \"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.827759 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-client-ca\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.827837 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15b4822e-5f78-4c15-a2ad-641d0af56a1b-audit-dir\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.827884 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b4tl\" (UniqueName: \"kubernetes.io/projected/507e4049-6297-4243-9692-3a3841f80b4c-kube-api-access-4b4tl\") pod \"cluster-samples-operator-665b6dd947-9mddb\" (UID: \"507e4049-6297-4243-9692-3a3841f80b4c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.827914 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42c53a24-3430-4028-a8cd-bfa948466eef-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-5xxjt\" (UID: \"42c53a24-3430-4028-a8cd-bfa948466eef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.827945 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqnnm\" (UniqueName: \"kubernetes.io/projected/d83bcc5d-5419-4656-882e-964b0d87e966-kube-api-access-mqnnm\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.827978 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd84d\" (UniqueName: \"kubernetes.io/projected/05cd76ab-027a-4c05-aeb0-702df5277cbe-kube-api-access-rd84d\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828039 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/05cd76ab-027a-4c05-aeb0-702df5277cbe-audit-dir\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828079 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/15b4822e-5f78-4c15-a2ad-641d0af56a1b-audit-policies\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828125 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/05cd76ab-027a-4c05-aeb0-702df5277cbe-encryption-config\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828161 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0df5d6ad-e65b-4070-909d-0bcd055db60f-config\") pod \"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828192 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0df5d6ad-e65b-4070-909d-0bcd055db60f-serving-cert\") pod \"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828220 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/507e4049-6297-4243-9692-3a3841f80b4c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9mddb\" (UID: \"507e4049-6297-4243-9692-3a3841f80b4c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828283 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klnld\" (UniqueName: \"kubernetes.io/projected/9195fe9f-3872-4a32-bb62-a13dfea6c331-kube-api-access-klnld\") pod 
\"machine-api-operator-5694c8668f-bp8jj\" (UID: \"9195fe9f-3872-4a32-bb62-a13dfea6c331\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828340 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0df5d6ad-e65b-4070-909d-0bcd055db60f-service-ca-bundle\") pod \"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828387 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9195fe9f-3872-4a32-bb62-a13dfea6c331-config\") pod \"machine-api-operator-5694c8668f-bp8jj\" (UID: \"9195fe9f-3872-4a32-bb62-a13dfea6c331\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828418 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/42c53a24-3430-4028-a8cd-bfa948466eef-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5xxjt\" (UID: \"42c53a24-3430-4028-a8cd-bfa948466eef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828449 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 
21:22:00.828485 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-image-import-ca\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828529 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9195fe9f-3872-4a32-bb62-a13dfea6c331-images\") pod \"machine-api-operator-5694c8668f-bp8jj\" (UID: \"9195fe9f-3872-4a32-bb62-a13dfea6c331\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828562 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/05cd76ab-027a-4c05-aeb0-702df5277cbe-etcd-client\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828592 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d83bcc5d-5419-4656-882e-964b0d87e966-serving-cert\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828626 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b4822e-5f78-4c15-a2ad-641d0af56a1b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828659 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4p99\" (UniqueName: \"kubernetes.io/projected/42c53a24-3430-4028-a8cd-bfa948466eef-kube-api-access-l4p99\") pod \"openshift-config-operator-7777fb866f-5xxjt\" (UID: \"42c53a24-3430-4028-a8cd-bfa948466eef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828692 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b4822e-5f78-4c15-a2ad-641d0af56a1b-serving-cert\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828723 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/15b4822e-5f78-4c15-a2ad-641d0af56a1b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828750 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05cd76ab-027a-4c05-aeb0-702df5277cbe-serving-cert\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828821 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0df5d6ad-e65b-4070-909d-0bcd055db60f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828859 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcm7p\" (UniqueName: \"kubernetes.io/projected/15b4822e-5f78-4c15-a2ad-641d0af56a1b-kube-api-access-jcm7p\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828893 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-config\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828926 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b4822e-5f78-4c15-a2ad-641d0af56a1b-etcd-client\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.828962 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9195fe9f-3872-4a32-bb62-a13dfea6c331-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bp8jj\" (UID: \"9195fe9f-3872-4a32-bb62-a13dfea6c331\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" Nov 24 21:22:00 crc 
kubenswrapper[4915]: I1124 21:22:00.828993 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.829021 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-audit\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.829051 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/05cd76ab-027a-4c05-aeb0-702df5277cbe-node-pullsecrets\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.837245 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.837523 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.838328 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.838578 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 
21:22:00.840929 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.841329 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.841372 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.841606 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.841716 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.841329 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.842013 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.842070 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.842232 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.842019 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.842521 4915 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.842727 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.842961 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.843105 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.843255 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.843410 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.843456 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.843656 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.843842 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.843895 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.844005 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 24 21:22:00 crc 
kubenswrapper[4915]: I1124 21:22:00.844023 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.844023 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.844026 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.844481 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sb4zq"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.845457 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.855279 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.857228 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-x7cqd"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.858386 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rdk5z"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.859849 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.861686 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.865502 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.879010 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.879490 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.879715 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.879917 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.880144 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.880425 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.880819 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.882769 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.884089 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-97jl2"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.887940 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.890324 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.890730 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.891035 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-d7cvw"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.891559 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.891835 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.893423 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.893911 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.894341 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-d7cvw" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.894934 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.895946 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.897016 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.897745 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nmx6q"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.898225 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4rr2s"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.898537 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nmx6q" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.898710 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rr2s" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.899644 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.901528 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.901729 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.902074 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.902123 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.902158 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.902072 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.902837 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.903116 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.903282 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 24 21:22:00 crc 
kubenswrapper[4915]: I1124 21:22:00.903395 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.903399 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.904385 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.906090 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.906213 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.906374 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.906476 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.906555 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.906654 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.906838 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.906930 4915 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.907501 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.907828 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.908118 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.908200 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.910298 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.910389 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.910592 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.910667 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.910884 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.910965 4915 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress-operator"/"metrics-tls" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.911050 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.911336 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.911380 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.911555 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.914899 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.915071 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.917024 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.930984 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.931355 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.934520 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 
21:22:00.935027 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-config\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935080 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b4822e-5f78-4c15-a2ad-641d0af56a1b-etcd-client\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935178 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9195fe9f-3872-4a32-bb62-a13dfea6c331-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bp8jj\" (UID: \"9195fe9f-3872-4a32-bb62-a13dfea6c331\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935244 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935280 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-audit\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 
21:22:00.935309 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/05cd76ab-027a-4c05-aeb0-702df5277cbe-node-pullsecrets\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935395 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-config\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935464 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/15b4822e-5f78-4c15-a2ad-641d0af56a1b-encryption-config\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935516 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqxjl\" (UniqueName: \"kubernetes.io/projected/0df5d6ad-e65b-4070-909d-0bcd055db60f-kube-api-access-gqxjl\") pod \"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935547 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-etcd-serving-ca\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " 
pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935579 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-client-ca\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935635 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15b4822e-5f78-4c15-a2ad-641d0af56a1b-audit-dir\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935693 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b4tl\" (UniqueName: \"kubernetes.io/projected/507e4049-6297-4243-9692-3a3841f80b4c-kube-api-access-4b4tl\") pod \"cluster-samples-operator-665b6dd947-9mddb\" (UID: \"507e4049-6297-4243-9692-3a3841f80b4c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935723 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42c53a24-3430-4028-a8cd-bfa948466eef-serving-cert\") pod \"openshift-config-operator-7777fb866f-5xxjt\" (UID: \"42c53a24-3430-4028-a8cd-bfa948466eef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935845 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd84d\" (UniqueName: 
\"kubernetes.io/projected/05cd76ab-027a-4c05-aeb0-702df5277cbe-kube-api-access-rd84d\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935875 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqnnm\" (UniqueName: \"kubernetes.io/projected/d83bcc5d-5419-4656-882e-964b0d87e966-kube-api-access-mqnnm\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935904 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/05cd76ab-027a-4c05-aeb0-702df5277cbe-audit-dir\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935933 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/15b4822e-5f78-4c15-a2ad-641d0af56a1b-audit-policies\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935961 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0df5d6ad-e65b-4070-909d-0bcd055db60f-config\") pod \"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.935986 4915 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0df5d6ad-e65b-4070-909d-0bcd055db60f-serving-cert\") pod \"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.936030 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/05cd76ab-027a-4c05-aeb0-702df5277cbe-encryption-config\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.936061 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klnld\" (UniqueName: \"kubernetes.io/projected/9195fe9f-3872-4a32-bb62-a13dfea6c331-kube-api-access-klnld\") pod \"machine-api-operator-5694c8668f-bp8jj\" (UID: \"9195fe9f-3872-4a32-bb62-a13dfea6c331\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.936152 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/507e4049-6297-4243-9692-3a3841f80b4c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9mddb\" (UID: \"507e4049-6297-4243-9692-3a3841f80b4c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.936226 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0df5d6ad-e65b-4070-909d-0bcd055db60f-service-ca-bundle\") pod \"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.936261 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/42c53a24-3430-4028-a8cd-bfa948466eef-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5xxjt\" (UID: \"42c53a24-3430-4028-a8cd-bfa948466eef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.936396 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9195fe9f-3872-4a32-bb62-a13dfea6c331-config\") pod \"machine-api-operator-5694c8668f-bp8jj\" (UID: \"9195fe9f-3872-4a32-bb62-a13dfea6c331\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.936432 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.936464 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-image-import-ca\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.936492 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9195fe9f-3872-4a32-bb62-a13dfea6c331-images\") pod 
\"machine-api-operator-5694c8668f-bp8jj\" (UID: \"9195fe9f-3872-4a32-bb62-a13dfea6c331\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.936517 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/05cd76ab-027a-4c05-aeb0-702df5277cbe-etcd-client\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.936556 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d83bcc5d-5419-4656-882e-964b0d87e966-serving-cert\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.936725 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b4822e-5f78-4c15-a2ad-641d0af56a1b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.936761 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b4822e-5f78-4c15-a2ad-641d0af56a1b-serving-cert\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.954164 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 
21:22:00.955322 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.955613 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.955714 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.956172 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.956866 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4p99\" (UniqueName: \"kubernetes.io/projected/42c53a24-3430-4028-a8cd-bfa948466eef-kube-api-access-l4p99\") pod \"openshift-config-operator-7777fb866f-5xxjt\" (UID: \"42c53a24-3430-4028-a8cd-bfa948466eef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.961066 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/15b4822e-5f78-4c15-a2ad-641d0af56a1b-audit-dir\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.961072 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-etcd-serving-ca\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " 
pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.961138 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15b4822e-5f78-4c15-a2ad-641d0af56a1b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.961502 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/05cd76ab-027a-4c05-aeb0-702df5277cbe-node-pullsecrets\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.961868 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.961919 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-client-ca\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.962066 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-audit\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " 
pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.962403 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/42c53a24-3430-4028-a8cd-bfa948466eef-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5xxjt\" (UID: \"42c53a24-3430-4028-a8cd-bfa948466eef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.962565 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0df5d6ad-e65b-4070-909d-0bcd055db60f-service-ca-bundle\") pod \"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.963011 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/05cd76ab-027a-4c05-aeb0-702df5277cbe-audit-dir\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.964308 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-config\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.964695 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-config\") pod \"apiserver-76f77b778f-d8b76\" (UID: 
\"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.965600 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9195fe9f-3872-4a32-bb62-a13dfea6c331-images\") pod \"machine-api-operator-5694c8668f-bp8jj\" (UID: \"9195fe9f-3872-4a32-bb62-a13dfea6c331\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.966533 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0df5d6ad-e65b-4070-909d-0bcd055db60f-config\") pod \"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.967234 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9195fe9f-3872-4a32-bb62-a13dfea6c331-config\") pod \"machine-api-operator-5694c8668f-bp8jj\" (UID: \"9195fe9f-3872-4a32-bb62-a13dfea6c331\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.967451 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/9195fe9f-3872-4a32-bb62-a13dfea6c331-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bp8jj\" (UID: \"9195fe9f-3872-4a32-bb62-a13dfea6c331\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.968402 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0df5d6ad-e65b-4070-909d-0bcd055db60f-serving-cert\") pod 
\"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.968466 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05cd76ab-027a-4c05-aeb0-702df5277cbe-serving-cert\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.968606 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/15b4822e-5f78-4c15-a2ad-641d0af56a1b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.970313 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/15b4822e-5f78-4c15-a2ad-641d0af56a1b-etcd-client\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.970336 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42c53a24-3430-4028-a8cd-bfa948466eef-serving-cert\") pod \"openshift-config-operator-7777fb866f-5xxjt\" (UID: \"42c53a24-3430-4028-a8cd-bfa948466eef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.970343 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7pvlk"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 
21:22:00.970904 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/15b4822e-5f78-4c15-a2ad-641d0af56a1b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.972468 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.972910 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.973375 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15b4822e-5f78-4c15-a2ad-641d0af56a1b-serving-cert\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.973543 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.973608 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.973701 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0df5d6ad-e65b-4070-909d-0bcd055db60f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.973899 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.974100 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.974472 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0df5d6ad-e65b-4070-909d-0bcd055db60f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.974543 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcm7p\" (UniqueName: \"kubernetes.io/projected/15b4822e-5f78-4c15-a2ad-641d0af56a1b-kube-api-access-jcm7p\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.974859 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/05cd76ab-027a-4c05-aeb0-702df5277cbe-etcd-client\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.975248 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/15b4822e-5f78-4c15-a2ad-641d0af56a1b-audit-policies\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.976177 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.976674 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/15b4822e-5f78-4c15-a2ad-641d0af56a1b-encryption-config\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.976715 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dwnvl"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.976887 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.977058 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dwnvl" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.977384 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.977528 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/05cd76ab-027a-4c05-aeb0-702df5277cbe-image-import-ca\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.978001 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d83bcc5d-5419-4656-882e-964b0d87e966-serving-cert\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.979632 4915 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.980027 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/05cd76ab-027a-4c05-aeb0-702df5277cbe-encryption-config\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.980131 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.980164 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05cd76ab-027a-4c05-aeb0-702df5277cbe-serving-cert\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.980847 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.981640 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.982254 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.982847 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.990835 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/507e4049-6297-4243-9692-3a3841f80b4c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9mddb\" (UID: \"507e4049-6297-4243-9692-3a3841f80b4c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.994851 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-kjj6c"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.995432 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.995990 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd"] Nov 24 21:22:00 crc kubenswrapper[4915]: I1124 21:22:00.996650 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.010152 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rpff6"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.010702 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rflqm"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.011107 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.012929 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rpff6" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.019589 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-brhzz"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.020023 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.020031 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rflqm" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.020243 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-brhzz" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.020319 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.020337 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zbtbl"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.020420 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.021678 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.021748 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.022067 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bp8jj"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.022139 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.024250 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.025045 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-p7n9h"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.029533 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.030292 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.032403 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.032564 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sb4zq"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.036740 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d8b76"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.036867 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9qlcq"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.038076 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x7cqd"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.039129 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-97jl2"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.041923 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.042762 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.043669 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.051117 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.054271 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.056116 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-bzdrs"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.057654 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bzdrs" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.058637 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.060620 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.062229 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.063841 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-llzdr"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.065117 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.065717 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-d7cvw"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.067150 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dwnvl"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.068357 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.069700 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.071102 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4rr2s"] Nov 24 
21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.072523 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.073977 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.075357 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rdk5z"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.075538 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h8x4\" (UniqueName: \"kubernetes.io/projected/7f11f7bc-4f9d-4bed-8557-de43a62647b1-kube-api-access-5h8x4\") pod \"package-server-manager-789f6589d5-w5jct\" (UID: \"7f11f7bc-4f9d-4bed-8557-de43a62647b1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.075601 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcc4ebd4-dcbc-464e-91cf-a52a944b461f-trusted-ca\") pod \"console-operator-58897d9998-sb4zq\" (UID: \"fcc4ebd4-dcbc-464e-91cf-a52a944b461f\") " pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.075650 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0cfbbde-c3a3-4306-a714-76134e43b495-audit-dir\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.075673 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.075699 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0828ca36-88fa-4eac-95ed-a8bc14a2b840-trusted-ca\") pod \"ingress-operator-5b745b69d9-lbpp5\" (UID: \"0828ca36-88fa-4eac-95ed-a8bc14a2b840\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.075720 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fc3e3b86-8f8c-4d9e-ad67-9f562e077367-tmpfs\") pod \"packageserver-d55dfcdfc-c98n4\" (UID: \"fc3e3b86-8f8c-4d9e-ad67-9f562e077367\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.075823 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.075883 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bc4b093-0be5-4df2-9f51-bf3e24d722af-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-hlzxv\" (UID: \"8bc4b093-0be5-4df2-9f51-bf3e24d722af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.075922 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e70a867a-f31a-495c-9f2a-13a996188c84-srv-cert\") pod \"olm-operator-6b444d44fb-j2dm5\" (UID: \"e70a867a-f31a-495c-9f2a-13a996188c84\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.075939 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.075961 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sptdr\" (UniqueName: \"kubernetes.io/projected/fc3e3b86-8f8c-4d9e-ad67-9f562e077367-kube-api-access-sptdr\") pod \"packageserver-d55dfcdfc-c98n4\" (UID: \"fc3e3b86-8f8c-4d9e-ad67-9f562e077367\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.075985 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc 
kubenswrapper[4915]: I1124 21:22:01.076010 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbbrb\" (UniqueName: \"kubernetes.io/projected/0828ca36-88fa-4eac-95ed-a8bc14a2b840-kube-api-access-nbbrb\") pod \"ingress-operator-5b745b69d9-lbpp5\" (UID: \"0828ca36-88fa-4eac-95ed-a8bc14a2b840\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076075 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-trusted-ca-bundle\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076113 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076163 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vvgd\" (UniqueName: \"kubernetes.io/projected/e0cfbbde-c3a3-4306-a714-76134e43b495-kube-api-access-6vvgd\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076211 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lpg4\" (UniqueName: 
\"kubernetes.io/projected/53094d5f-2a39-479b-912d-c766fd9f4fa1-kube-api-access-9lpg4\") pod \"migrator-59844c95c7-4rr2s\" (UID: \"53094d5f-2a39-479b-912d-c766fd9f4fa1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rr2s" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076240 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076276 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzr6t\" (UniqueName: \"kubernetes.io/projected/e70a867a-f31a-495c-9f2a-13a996188c84-kube-api-access-qzr6t\") pod \"olm-operator-6b444d44fb-j2dm5\" (UID: \"e70a867a-f31a-495c-9f2a-13a996188c84\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076292 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96ed7ec2-f67e-4a3a-9023-34ac783d91db-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zgpwn\" (UID: \"96ed7ec2-f67e-4a3a-9023-34ac783d91db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076308 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/56ba8653-31c9-4e8a-bc67-9bfd3514677b-auth-proxy-config\") pod \"machine-approver-56656f9798-5qvrk\" (UID: \"56ba8653-31c9-4e8a-bc67-9bfd3514677b\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076322 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-service-ca\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076341 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f11f7bc-4f9d-4bed-8557-de43a62647b1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w5jct\" (UID: \"7f11f7bc-4f9d-4bed-8557-de43a62647b1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076369 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/56ba8653-31c9-4e8a-bc67-9bfd3514677b-machine-approver-tls\") pod \"machine-approver-56656f9798-5qvrk\" (UID: \"56ba8653-31c9-4e8a-bc67-9bfd3514677b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076417 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcv8q\" (UniqueName: \"kubernetes.io/projected/56ba8653-31c9-4e8a-bc67-9bfd3514677b-kube-api-access-fcv8q\") pod \"machine-approver-56656f9798-5qvrk\" (UID: \"56ba8653-31c9-4e8a-bc67-9bfd3514677b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076436 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-oauth-serving-cert\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076458 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb5cp\" (UniqueName: \"kubernetes.io/projected/c25872a5-42e3-4e20-ad54-594477784fa2-kube-api-access-wb5cp\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076483 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc4ebd4-dcbc-464e-91cf-a52a944b461f-serving-cert\") pod \"console-operator-58897d9998-sb4zq\" (UID: \"fcc4ebd4-dcbc-464e-91cf-a52a944b461f\") " pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076497 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bc4b093-0be5-4df2-9f51-bf3e24d722af-config\") pod \"kube-apiserver-operator-766d6c64bb-hlzxv\" (UID: \"8bc4b093-0be5-4df2-9f51-bf3e24d722af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076688 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-hmd9t"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076882 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076933 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/96ed7ec2-f67e-4a3a-9023-34ac783d91db-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zgpwn\" (UID: \"96ed7ec2-f67e-4a3a-9023-34ac783d91db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.076968 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de799d6d-f599-4dfd-a95b-4f7daf00d23a-client-ca\") pod \"route-controller-manager-6576b87f9c-762rz\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.077023 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmdhg\" (UniqueName: \"kubernetes.io/projected/de799d6d-f599-4dfd-a95b-4f7daf00d23a-kube-api-access-wmdhg\") pod \"route-controller-manager-6576b87f9c-762rz\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.077207 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-console-config\") pod \"console-f9d7485db-x7cqd\" (UID: 
\"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.077253 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.077363 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc3e3b86-8f8c-4d9e-ad67-9f562e077367-apiservice-cert\") pod \"packageserver-d55dfcdfc-c98n4\" (UID: \"fc3e3b86-8f8c-4d9e-ad67-9f562e077367\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.077418 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c25872a5-42e3-4e20-ad54-594477784fa2-console-serving-cert\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.077452 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqmr7\" (UniqueName: \"kubernetes.io/projected/96ed7ec2-f67e-4a3a-9023-34ac783d91db-kube-api-access-hqmr7\") pod \"cluster-image-registry-operator-dc59b4c8b-zgpwn\" (UID: \"96ed7ec2-f67e-4a3a-9023-34ac783d91db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.077694 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksgr2\" (UniqueName: \"kubernetes.io/projected/fa3f4f80-34b9-4283-874c-ace65ae5ed4d-kube-api-access-ksgr2\") pod \"multus-admission-controller-857f4d67dd-nmx6q\" (UID: \"fa3f4f80-34b9-4283-874c-ace65ae5ed4d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nmx6q" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.077746 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.077765 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hmd9t" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.077823 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.078022 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-lrzbs"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.078037 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c57c5c5b-42fb-44e2-bbcb-2a8797d45d58-config\") pod \"openshift-apiserver-operator-796bbdcf4f-snd2t\" (UID: 
\"c57c5c5b-42fb-44e2-bbcb-2a8797d45d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.078064 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de799d6d-f599-4dfd-a95b-4f7daf00d23a-config\") pod \"route-controller-manager-6576b87f9c-762rz\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.078083 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0828ca36-88fa-4eac-95ed-a8bc14a2b840-metrics-tls\") pod \"ingress-operator-5b745b69d9-lbpp5\" (UID: \"0828ca36-88fa-4eac-95ed-a8bc14a2b840\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.078099 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de799d6d-f599-4dfd-a95b-4f7daf00d23a-serving-cert\") pod \"route-controller-manager-6576b87f9c-762rz\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.078172 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-audit-policies\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.078218 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8bc4b093-0be5-4df2-9f51-bf3e24d722af-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hlzxv\" (UID: \"8bc4b093-0be5-4df2-9f51-bf3e24d722af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.078359 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ba8653-31c9-4e8a-bc67-9bfd3514677b-config\") pod \"machine-approver-56656f9798-5qvrk\" (UID: \"56ba8653-31c9-4e8a-bc67-9bfd3514677b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.078476 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.078562 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lrzbs" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.078572 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c57c5c5b-42fb-44e2-bbcb-2a8797d45d58-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-snd2t\" (UID: \"c57c5c5b-42fb-44e2-bbcb-2a8797d45d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.078715 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c25872a5-42e3-4e20-ad54-594477784fa2-console-oauth-config\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.078808 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5xjs\" (UniqueName: \"kubernetes.io/projected/fcc4ebd4-dcbc-464e-91cf-a52a944b461f-kube-api-access-d5xjs\") pod \"console-operator-58897d9998-sb4zq\" (UID: \"fcc4ebd4-dcbc-464e-91cf-a52a944b461f\") " pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.078910 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fa3f4f80-34b9-4283-874c-ace65ae5ed4d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nmx6q\" (UID: \"fa3f4f80-34b9-4283-874c-ace65ae5ed4d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nmx6q" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.079001 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e70a867a-f31a-495c-9f2a-13a996188c84-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j2dm5\" (UID: \"e70a867a-f31a-495c-9f2a-13a996188c84\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.079083 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0828ca36-88fa-4eac-95ed-a8bc14a2b840-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lbpp5\" (UID: \"0828ca36-88fa-4eac-95ed-a8bc14a2b840\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.079129 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jqfv\" (UniqueName: \"kubernetes.io/projected/c57c5c5b-42fb-44e2-bbcb-2a8797d45d58-kube-api-access-2jqfv\") pod \"openshift-apiserver-operator-796bbdcf4f-snd2t\" (UID: \"c57c5c5b-42fb-44e2-bbcb-2a8797d45d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.079171 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc3e3b86-8f8c-4d9e-ad67-9f562e077367-webhook-cert\") pod \"packageserver-d55dfcdfc-c98n4\" (UID: \"fc3e3b86-8f8c-4d9e-ad67-9f562e077367\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.079195 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc4ebd4-dcbc-464e-91cf-a52a944b461f-config\") pod \"console-operator-58897d9998-sb4zq\" (UID: \"fcc4ebd4-dcbc-464e-91cf-a52a944b461f\") " 
pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.079218 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96ed7ec2-f67e-4a3a-9023-34ac783d91db-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zgpwn\" (UID: \"96ed7ec2-f67e-4a3a-9023-34ac783d91db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.081113 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zbtbl"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.081953 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.083461 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.083544 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nmx6q"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.084801 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.085901 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7pvlk"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.088533 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.088556 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.089648 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-brhzz"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.092372 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.093396 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.094440 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bzdrs"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.095483 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.096533 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hmd9t"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.097529 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rpff6"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.098529 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rflqm"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.099540 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-llzdr"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.102258 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 24 21:22:01 crc 
kubenswrapper[4915]: I1124 21:22:01.121940 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.144921 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.162484 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180387 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180417 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c57c5c5b-42fb-44e2-bbcb-2a8797d45d58-config\") pod \"openshift-apiserver-operator-796bbdcf4f-snd2t\" (UID: \"c57c5c5b-42fb-44e2-bbcb-2a8797d45d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180434 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de799d6d-f599-4dfd-a95b-4f7daf00d23a-config\") pod \"route-controller-manager-6576b87f9c-762rz\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180450 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0828ca36-88fa-4eac-95ed-a8bc14a2b840-metrics-tls\") pod \"ingress-operator-5b745b69d9-lbpp5\" (UID: \"0828ca36-88fa-4eac-95ed-a8bc14a2b840\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180467 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de799d6d-f599-4dfd-a95b-4f7daf00d23a-serving-cert\") pod \"route-controller-manager-6576b87f9c-762rz\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180482 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-audit-policies\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180499 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8bc4b093-0be5-4df2-9f51-bf3e24d722af-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hlzxv\" (UID: \"8bc4b093-0be5-4df2-9f51-bf3e24d722af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180519 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ba8653-31c9-4e8a-bc67-9bfd3514677b-config\") pod \"machine-approver-56656f9798-5qvrk\" (UID: \"56ba8653-31c9-4e8a-bc67-9bfd3514677b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" Nov 24 21:22:01 crc 
kubenswrapper[4915]: I1124 21:22:01.180541 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180574 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c57c5c5b-42fb-44e2-bbcb-2a8797d45d58-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-snd2t\" (UID: \"c57c5c5b-42fb-44e2-bbcb-2a8797d45d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180589 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c25872a5-42e3-4e20-ad54-594477784fa2-console-oauth-config\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180605 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5xjs\" (UniqueName: \"kubernetes.io/projected/fcc4ebd4-dcbc-464e-91cf-a52a944b461f-kube-api-access-d5xjs\") pod \"console-operator-58897d9998-sb4zq\" (UID: \"fcc4ebd4-dcbc-464e-91cf-a52a944b461f\") " pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180624 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fa3f4f80-34b9-4283-874c-ace65ae5ed4d-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-nmx6q\" (UID: \"fa3f4f80-34b9-4283-874c-ace65ae5ed4d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nmx6q" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180644 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jqfv\" (UniqueName: \"kubernetes.io/projected/c57c5c5b-42fb-44e2-bbcb-2a8797d45d58-kube-api-access-2jqfv\") pod \"openshift-apiserver-operator-796bbdcf4f-snd2t\" (UID: \"c57c5c5b-42fb-44e2-bbcb-2a8797d45d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180674 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e70a867a-f31a-495c-9f2a-13a996188c84-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j2dm5\" (UID: \"e70a867a-f31a-495c-9f2a-13a996188c84\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180688 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0828ca36-88fa-4eac-95ed-a8bc14a2b840-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lbpp5\" (UID: \"0828ca36-88fa-4eac-95ed-a8bc14a2b840\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180703 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc3e3b86-8f8c-4d9e-ad67-9f562e077367-webhook-cert\") pod \"packageserver-d55dfcdfc-c98n4\" (UID: \"fc3e3b86-8f8c-4d9e-ad67-9f562e077367\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.180723 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc4ebd4-dcbc-464e-91cf-a52a944b461f-config\") pod \"console-operator-58897d9998-sb4zq\" (UID: \"fcc4ebd4-dcbc-464e-91cf-a52a944b461f\") " pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.181702 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-audit-policies\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.181370 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56ba8653-31c9-4e8a-bc67-9bfd3514677b-config\") pod \"machine-approver-56656f9798-5qvrk\" (UID: \"56ba8653-31c9-4e8a-bc67-9bfd3514677b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.182320 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.182682 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de799d6d-f599-4dfd-a95b-4f7daf00d23a-config\") pod \"route-controller-manager-6576b87f9c-762rz\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.181271 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.183725 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc4ebd4-dcbc-464e-91cf-a52a944b461f-config\") pod \"console-operator-58897d9998-sb4zq\" (UID: \"fcc4ebd4-dcbc-464e-91cf-a52a944b461f\") " pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.184699 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc3e3b86-8f8c-4d9e-ad67-9f562e077367-webhook-cert\") pod \"packageserver-d55dfcdfc-c98n4\" (UID: \"fc3e3b86-8f8c-4d9e-ad67-9f562e077367\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.184828 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de799d6d-f599-4dfd-a95b-4f7daf00d23a-serving-cert\") pod \"route-controller-manager-6576b87f9c-762rz\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.184839 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c25872a5-42e3-4e20-ad54-594477784fa2-console-oauth-config\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185107 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fa3f4f80-34b9-4283-874c-ace65ae5ed4d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nmx6q\" (UID: \"fa3f4f80-34b9-4283-874c-ace65ae5ed4d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nmx6q" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185389 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96ed7ec2-f67e-4a3a-9023-34ac783d91db-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zgpwn\" (UID: \"96ed7ec2-f67e-4a3a-9023-34ac783d91db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185426 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fc3e3b86-8f8c-4d9e-ad67-9f562e077367-tmpfs\") pod \"packageserver-d55dfcdfc-c98n4\" (UID: \"fc3e3b86-8f8c-4d9e-ad67-9f562e077367\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185450 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h8x4\" (UniqueName: \"kubernetes.io/projected/7f11f7bc-4f9d-4bed-8557-de43a62647b1-kube-api-access-5h8x4\") pod \"package-server-manager-789f6589d5-w5jct\" (UID: \"7f11f7bc-4f9d-4bed-8557-de43a62647b1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185468 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcc4ebd4-dcbc-464e-91cf-a52a944b461f-trusted-ca\") pod \"console-operator-58897d9998-sb4zq\" (UID: \"fcc4ebd4-dcbc-464e-91cf-a52a944b461f\") " pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:01 
crc kubenswrapper[4915]: I1124 21:22:01.185485 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0cfbbde-c3a3-4306-a714-76134e43b495-audit-dir\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185501 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185524 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0828ca36-88fa-4eac-95ed-a8bc14a2b840-trusted-ca\") pod \"ingress-operator-5b745b69d9-lbpp5\" (UID: \"0828ca36-88fa-4eac-95ed-a8bc14a2b840\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185546 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185579 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bc4b093-0be5-4df2-9f51-bf3e24d722af-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hlzxv\" (UID: 
\"8bc4b093-0be5-4df2-9f51-bf3e24d722af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185595 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sptdr\" (UniqueName: \"kubernetes.io/projected/fc3e3b86-8f8c-4d9e-ad67-9f562e077367-kube-api-access-sptdr\") pod \"packageserver-d55dfcdfc-c98n4\" (UID: \"fc3e3b86-8f8c-4d9e-ad67-9f562e077367\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185620 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e70a867a-f31a-495c-9f2a-13a996188c84-srv-cert\") pod \"olm-operator-6b444d44fb-j2dm5\" (UID: \"e70a867a-f31a-495c-9f2a-13a996188c84\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185635 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185652 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185669 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nbbrb\" (UniqueName: \"kubernetes.io/projected/0828ca36-88fa-4eac-95ed-a8bc14a2b840-kube-api-access-nbbrb\") pod \"ingress-operator-5b745b69d9-lbpp5\" (UID: \"0828ca36-88fa-4eac-95ed-a8bc14a2b840\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185708 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-trusted-ca-bundle\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185728 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185734 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185746 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vvgd\" (UniqueName: \"kubernetes.io/projected/e0cfbbde-c3a3-4306-a714-76134e43b495-kube-api-access-6vvgd\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" 
Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185924 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fc3e3b86-8f8c-4d9e-ad67-9f562e077367-tmpfs\") pod \"packageserver-d55dfcdfc-c98n4\" (UID: \"fc3e3b86-8f8c-4d9e-ad67-9f562e077367\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185940 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lpg4\" (UniqueName: \"kubernetes.io/projected/53094d5f-2a39-479b-912d-c766fd9f4fa1-kube-api-access-9lpg4\") pod \"migrator-59844c95c7-4rr2s\" (UID: \"53094d5f-2a39-479b-912d-c766fd9f4fa1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rr2s" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.185992 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186019 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0cfbbde-c3a3-4306-a714-76134e43b495-audit-dir\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186026 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96ed7ec2-f67e-4a3a-9023-34ac783d91db-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zgpwn\" (UID: 
\"96ed7ec2-f67e-4a3a-9023-34ac783d91db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186076 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzr6t\" (UniqueName: \"kubernetes.io/projected/e70a867a-f31a-495c-9f2a-13a996188c84-kube-api-access-qzr6t\") pod \"olm-operator-6b444d44fb-j2dm5\" (UID: \"e70a867a-f31a-495c-9f2a-13a996188c84\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186127 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/56ba8653-31c9-4e8a-bc67-9bfd3514677b-auth-proxy-config\") pod \"machine-approver-56656f9798-5qvrk\" (UID: \"56ba8653-31c9-4e8a-bc67-9bfd3514677b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186160 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-service-ca\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186199 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f11f7bc-4f9d-4bed-8557-de43a62647b1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w5jct\" (UID: \"7f11f7bc-4f9d-4bed-8557-de43a62647b1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186249 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-oauth-serving-cert\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186280 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb5cp\" (UniqueName: \"kubernetes.io/projected/c25872a5-42e3-4e20-ad54-594477784fa2-kube-api-access-wb5cp\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186311 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/56ba8653-31c9-4e8a-bc67-9bfd3514677b-machine-approver-tls\") pod \"machine-approver-56656f9798-5qvrk\" (UID: \"56ba8653-31c9-4e8a-bc67-9bfd3514677b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186340 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcv8q\" (UniqueName: \"kubernetes.io/projected/56ba8653-31c9-4e8a-bc67-9bfd3514677b-kube-api-access-fcv8q\") pod \"machine-approver-56656f9798-5qvrk\" (UID: \"56ba8653-31c9-4e8a-bc67-9bfd3514677b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186386 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc4ebd4-dcbc-464e-91cf-a52a944b461f-serving-cert\") pod \"console-operator-58897d9998-sb4zq\" (UID: \"fcc4ebd4-dcbc-464e-91cf-a52a944b461f\") " pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186413 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bc4b093-0be5-4df2-9f51-bf3e24d722af-config\") pod \"kube-apiserver-operator-766d6c64bb-hlzxv\" (UID: \"8bc4b093-0be5-4df2-9f51-bf3e24d722af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186443 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186473 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/96ed7ec2-f67e-4a3a-9023-34ac783d91db-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zgpwn\" (UID: \"96ed7ec2-f67e-4a3a-9023-34ac783d91db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186501 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de799d6d-f599-4dfd-a95b-4f7daf00d23a-client-ca\") pod \"route-controller-manager-6576b87f9c-762rz\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186529 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmdhg\" (UniqueName: \"kubernetes.io/projected/de799d6d-f599-4dfd-a95b-4f7daf00d23a-kube-api-access-wmdhg\") pod 
\"route-controller-manager-6576b87f9c-762rz\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186535 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fcc4ebd4-dcbc-464e-91cf-a52a944b461f-trusted-ca\") pod \"console-operator-58897d9998-sb4zq\" (UID: \"fcc4ebd4-dcbc-464e-91cf-a52a944b461f\") " pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186562 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc3e3b86-8f8c-4d9e-ad67-9f562e077367-apiservice-cert\") pod \"packageserver-d55dfcdfc-c98n4\" (UID: \"fc3e3b86-8f8c-4d9e-ad67-9f562e077367\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186605 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-console-config\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186636 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186668 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c25872a5-42e3-4e20-ad54-594477784fa2-console-serving-cert\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186697 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmr7\" (UniqueName: \"kubernetes.io/projected/96ed7ec2-f67e-4a3a-9023-34ac783d91db-kube-api-access-hqmr7\") pod \"cluster-image-registry-operator-dc59b4c8b-zgpwn\" (UID: \"96ed7ec2-f67e-4a3a-9023-34ac783d91db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186733 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksgr2\" (UniqueName: \"kubernetes.io/projected/fa3f4f80-34b9-4283-874c-ace65ae5ed4d-kube-api-access-ksgr2\") pod \"multus-admission-controller-857f4d67dd-nmx6q\" (UID: \"fa3f4f80-34b9-4283-874c-ace65ae5ed4d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nmx6q" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.186765 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.187742 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.188532 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0828ca36-88fa-4eac-95ed-a8bc14a2b840-trusted-ca\") pod \"ingress-operator-5b745b69d9-lbpp5\" (UID: \"0828ca36-88fa-4eac-95ed-a8bc14a2b840\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.188559 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e70a867a-f31a-495c-9f2a-13a996188c84-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j2dm5\" (UID: \"e70a867a-f31a-495c-9f2a-13a996188c84\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.188900 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.189193 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e70a867a-f31a-495c-9f2a-13a996188c84-srv-cert\") pod \"olm-operator-6b444d44fb-j2dm5\" (UID: \"e70a867a-f31a-495c-9f2a-13a996188c84\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.189280 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-service-ca\") pod \"console-f9d7485db-x7cqd\" (UID: 
\"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.189550 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcc4ebd4-dcbc-464e-91cf-a52a944b461f-serving-cert\") pod \"console-operator-58897d9998-sb4zq\" (UID: \"fcc4ebd4-dcbc-464e-91cf-a52a944b461f\") " pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.189854 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de799d6d-f599-4dfd-a95b-4f7daf00d23a-client-ca\") pod \"route-controller-manager-6576b87f9c-762rz\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.190035 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc3e3b86-8f8c-4d9e-ad67-9f562e077367-apiservice-cert\") pod \"packageserver-d55dfcdfc-c98n4\" (UID: \"fc3e3b86-8f8c-4d9e-ad67-9f562e077367\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.190185 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.190322 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-oauth-serving-cert\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.191033 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.191219 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0828ca36-88fa-4eac-95ed-a8bc14a2b840-metrics-tls\") pod \"ingress-operator-5b745b69d9-lbpp5\" (UID: \"0828ca36-88fa-4eac-95ed-a8bc14a2b840\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.191698 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-trusted-ca-bundle\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.191733 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/56ba8653-31c9-4e8a-bc67-9bfd3514677b-auth-proxy-config\") pod \"machine-approver-56656f9798-5qvrk\" (UID: \"56ba8653-31c9-4e8a-bc67-9bfd3514677b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.191934 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-config\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-console-config\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.192027 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/96ed7ec2-f67e-4a3a-9023-34ac783d91db-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-zgpwn\" (UID: \"96ed7ec2-f67e-4a3a-9023-34ac783d91db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.192199 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.192394 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c57c5c5b-42fb-44e2-bbcb-2a8797d45d58-config\") pod \"openshift-apiserver-operator-796bbdcf4f-snd2t\" (UID: \"c57c5c5b-42fb-44e2-bbcb-2a8797d45d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.193254 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 
21:22:01.193528 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.193662 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/96ed7ec2-f67e-4a3a-9023-34ac783d91db-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-zgpwn\" (UID: \"96ed7ec2-f67e-4a3a-9023-34ac783d91db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.193730 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.194361 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.194514 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c25872a5-42e3-4e20-ad54-594477784fa2-console-serving-cert\") pod 
\"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.195417 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/56ba8653-31c9-4e8a-bc67-9bfd3514677b-machine-approver-tls\") pod \"machine-approver-56656f9798-5qvrk\" (UID: \"56ba8653-31c9-4e8a-bc67-9bfd3514677b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.202697 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.222120 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.241991 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.245922 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c57c5c5b-42fb-44e2-bbcb-2a8797d45d58-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-snd2t\" (UID: \"c57c5c5b-42fb-44e2-bbcb-2a8797d45d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.262543 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.272207 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8bc4b093-0be5-4df2-9f51-bf3e24d722af-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-hlzxv\" (UID: \"8bc4b093-0be5-4df2-9f51-bf3e24d722af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.282515 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.283477 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bc4b093-0be5-4df2-9f51-bf3e24d722af-config\") pod \"kube-apiserver-operator-766d6c64bb-hlzxv\" (UID: \"8bc4b093-0be5-4df2-9f51-bf3e24d722af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.301943 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.312610 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7f11f7bc-4f9d-4bed-8557-de43a62647b1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-w5jct\" (UID: \"7f11f7bc-4f9d-4bed-8557-de43a62647b1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.322031 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.357223 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4p99\" (UniqueName: 
\"kubernetes.io/projected/42c53a24-3430-4028-a8cd-bfa948466eef-kube-api-access-l4p99\") pod \"openshift-config-operator-7777fb866f-5xxjt\" (UID: \"42c53a24-3430-4028-a8cd-bfa948466eef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.362279 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.395155 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klnld\" (UniqueName: \"kubernetes.io/projected/9195fe9f-3872-4a32-bb62-a13dfea6c331-kube-api-access-klnld\") pod \"machine-api-operator-5694c8668f-bp8jj\" (UID: \"9195fe9f-3872-4a32-bb62-a13dfea6c331\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.414374 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd84d\" (UniqueName: \"kubernetes.io/projected/05cd76ab-027a-4c05-aeb0-702df5277cbe-kube-api-access-rd84d\") pod \"apiserver-76f77b778f-d8b76\" (UID: \"05cd76ab-027a-4c05-aeb0-702df5277cbe\") " pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.425645 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.437908 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqnnm\" (UniqueName: \"kubernetes.io/projected/d83bcc5d-5419-4656-882e-964b0d87e966-kube-api-access-mqnnm\") pod \"controller-manager-879f6c89f-p7n9h\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.458797 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b4tl\" (UniqueName: \"kubernetes.io/projected/507e4049-6297-4243-9692-3a3841f80b4c-kube-api-access-4b4tl\") pod \"cluster-samples-operator-665b6dd947-9mddb\" (UID: \"507e4049-6297-4243-9692-3a3841f80b4c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.477040 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqxjl\" (UniqueName: \"kubernetes.io/projected/0df5d6ad-e65b-4070-909d-0bcd055db60f-kube-api-access-gqxjl\") pod \"authentication-operator-69f744f599-9qlcq\" (UID: \"0df5d6ad-e65b-4070-909d-0bcd055db60f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.481139 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.485135 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.503838 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.520611 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.523577 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.543547 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.561222 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.562619 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.583297 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.592006 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.604864 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.622313 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.640099 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bp8jj"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.642185 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.663066 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.682643 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.695414 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d8b76"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.703860 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.715486 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.722706 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.738546 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-9qlcq"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.742519 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 24 21:22:01 crc kubenswrapper[4915]: W1124 21:22:01.755412 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0df5d6ad_e65b_4070_909d_0bcd055db60f.slice/crio-dd619dc9cabad1537191f180e741ea9ccc7652ae1881de51077063592f0b9991 WatchSource:0}: Error finding container dd619dc9cabad1537191f180e741ea9ccc7652ae1881de51077063592f0b9991: Status 404 returned error can't find the container with id dd619dc9cabad1537191f180e741ea9ccc7652ae1881de51077063592f0b9991 Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.762516 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.801728 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcm7p\" (UniqueName: \"kubernetes.io/projected/15b4822e-5f78-4c15-a2ad-641d0af56a1b-kube-api-access-jcm7p\") pod \"apiserver-7bbb656c7d-zsbmp\" (UID: \"15b4822e-5f78-4c15-a2ad-641d0af56a1b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.824415 4915 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.829265 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.842922 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.863232 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.882154 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.898611 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-p7n9h"] Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.901716 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.921711 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.942372 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.962594 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.981435 4915 request.go:700] Waited for 1.000340431s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.982744 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 24 21:22:01 crc kubenswrapper[4915]: I1124 21:22:01.987472 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt"] Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.002536 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.023120 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 24 21:22:02 crc kubenswrapper[4915]: W1124 21:22:02.034069 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c53a24_3430_4028_a8cd_bfa948466eef.slice/crio-2c6bfa5beec2a31bfb713f8823d6a8cb28a27430b9e77897344089d2249e526d WatchSource:0}: Error finding container 2c6bfa5beec2a31bfb713f8823d6a8cb28a27430b9e77897344089d2249e526d: Status 404 returned error can't find the container with id 2c6bfa5beec2a31bfb713f8823d6a8cb28a27430b9e77897344089d2249e526d Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.041875 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.062187 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.081858 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 24 
21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.101502 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.103284 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.126214 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.142398 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.162005 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" event={"ID":"9195fe9f-3872-4a32-bb62-a13dfea6c331","Type":"ContainerStarted","Data":"7373b866c94a16d516169755a3d143e018151f6a52993ec9801be2ed84fd9f17"} Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.162052 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" event={"ID":"9195fe9f-3872-4a32-bb62-a13dfea6c331","Type":"ContainerStarted","Data":"8556177c6cecd8f0b2edaee224c2be29e31971c9b5fcdf6012739a0fae7a16c0"} Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.162067 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" event={"ID":"9195fe9f-3872-4a32-bb62-a13dfea6c331","Type":"ContainerStarted","Data":"f74b1c4298fae0de57c428c959225db82a648a16534bafa2d30b33e1b9a3cbe5"} Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.162171 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.172370 
4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" event={"ID":"d83bcc5d-5419-4656-882e-964b0d87e966","Type":"ContainerStarted","Data":"8e4e2983221b2abbdd5b0329b83a3f246bc58650f1066548c29a8ecf235ac018"} Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.172411 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" event={"ID":"d83bcc5d-5419-4656-882e-964b0d87e966","Type":"ContainerStarted","Data":"86301b4b65b373895b342259cb2b759b3ba76d856be8f748e2c620d977d3fd5b"} Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.173013 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.181029 4915 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-p7n9h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.181073 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" podUID="d83bcc5d-5419-4656-882e-964b0d87e966" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.182953 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" event={"ID":"42c53a24-3430-4028-a8cd-bfa948466eef","Type":"ContainerStarted","Data":"74b8a9576400a5a3b65e4b1af660b0680d3ef4448982ef5b896b4c8df3657a4e"} Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.183008 4915 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" event={"ID":"42c53a24-3430-4028-a8cd-bfa948466eef","Type":"ContainerStarted","Data":"2c6bfa5beec2a31bfb713f8823d6a8cb28a27430b9e77897344089d2249e526d"} Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.183953 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.184711 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb" event={"ID":"507e4049-6297-4243-9692-3a3841f80b4c","Type":"ContainerStarted","Data":"c6923ec72b6e9c20c45961a8b1b35246181001bdf5c015d07e579d1e7b7fba9a"} Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.184738 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb" event={"ID":"507e4049-6297-4243-9692-3a3841f80b4c","Type":"ContainerStarted","Data":"c933fc46d490bbbefba1215eedcc1bc2eabbb9cf8b17af5f10a6462d7f7abe28"} Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.186977 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" event={"ID":"0df5d6ad-e65b-4070-909d-0bcd055db60f","Type":"ContainerStarted","Data":"fb1c3559c9f6b76dd36808f3e5e4eff560a0122785ab96cabb7c00d2b91f3955"} Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.187012 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" event={"ID":"0df5d6ad-e65b-4070-909d-0bcd055db60f","Type":"ContainerStarted","Data":"dd619dc9cabad1537191f180e741ea9ccc7652ae1881de51077063592f0b9991"} Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.195673 4915 generic.go:334] "Generic (PLEG): container 
finished" podID="05cd76ab-027a-4c05-aeb0-702df5277cbe" containerID="a882c26204a593be5b451098c154bc9c334f9bcdb1a4f36e01d771311d663f2a" exitCode=0 Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.195720 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d8b76" event={"ID":"05cd76ab-027a-4c05-aeb0-702df5277cbe","Type":"ContainerDied","Data":"a882c26204a593be5b451098c154bc9c334f9bcdb1a4f36e01d771311d663f2a"} Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.195751 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d8b76" event={"ID":"05cd76ab-027a-4c05-aeb0-702df5277cbe","Type":"ContainerStarted","Data":"6067fb469b86255de18e8c3b254944f68770ebeb4e49fd1de84dfa107ce4f18c"} Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.202030 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.223080 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.245094 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.263162 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.278035 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp"] Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.283300 4915 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.302880 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.323016 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.341953 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 24 21:22:02 crc kubenswrapper[4915]: W1124 21:22:02.347957 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15b4822e_5f78_4c15_a2ad_641d0af56a1b.slice/crio-c30ea5d609f982df0665f5f39aaf3c3292aa84b320226f1a36c03539aa534f8d WatchSource:0}: Error finding container c30ea5d609f982df0665f5f39aaf3c3292aa84b320226f1a36c03539aa534f8d: Status 404 returned error can't find the container with id c30ea5d609f982df0665f5f39aaf3c3292aa84b320226f1a36c03539aa534f8d Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.361999 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.383351 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.402647 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.422886 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.442378 4915 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.462120 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.482147 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.502094 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.522028 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.542953 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.568275 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.589539 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.603026 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.622604 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.642037 4915 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.662328 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.682376 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.702293 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.722571 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.742343 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.762133 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.782208 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.802581 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.822257 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.842647 4915 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.862660 4915 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.881923 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.901882 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.922489 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.942282 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.962266 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 24 21:22:02 crc kubenswrapper[4915]: I1124 21:22:02.981597 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.000980 4915 request.go:700] Waited for 1.92217175s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0 Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.002419 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.102085 4915 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2jqfv\" (UniqueName: \"kubernetes.io/projected/c57c5c5b-42fb-44e2-bbcb-2a8797d45d58-kube-api-access-2jqfv\") pod \"openshift-apiserver-operator-796bbdcf4f-snd2t\" (UID: \"c57c5c5b-42fb-44e2-bbcb-2a8797d45d58\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.104366 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5xjs\" (UniqueName: \"kubernetes.io/projected/fcc4ebd4-dcbc-464e-91cf-a52a944b461f-kube-api-access-d5xjs\") pod \"console-operator-58897d9998-sb4zq\" (UID: \"fcc4ebd4-dcbc-464e-91cf-a52a944b461f\") " pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.112232 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8bc4b093-0be5-4df2-9f51-bf3e24d722af-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-hlzxv\" (UID: \"8bc4b093-0be5-4df2-9f51-bf3e24d722af\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.123174 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.129439 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0828ca36-88fa-4eac-95ed-a8bc14a2b840-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lbpp5\" (UID: \"0828ca36-88fa-4eac-95ed-a8bc14a2b840\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.144154 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/96ed7ec2-f67e-4a3a-9023-34ac783d91db-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-zgpwn\" (UID: \"96ed7ec2-f67e-4a3a-9023-34ac783d91db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.169092 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h8x4\" (UniqueName: \"kubernetes.io/projected/7f11f7bc-4f9d-4bed-8557-de43a62647b1-kube-api-access-5h8x4\") pod \"package-server-manager-789f6589d5-w5jct\" (UID: \"7f11f7bc-4f9d-4bed-8557-de43a62647b1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.182692 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vvgd\" (UniqueName: \"kubernetes.io/projected/e0cfbbde-c3a3-4306-a714-76134e43b495-kube-api-access-6vvgd\") pod \"oauth-openshift-558db77b4-rdk5z\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.197074 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sptdr\" (UniqueName: 
\"kubernetes.io/projected/fc3e3b86-8f8c-4d9e-ad67-9f562e077367-kube-api-access-sptdr\") pod \"packageserver-d55dfcdfc-c98n4\" (UID: \"fc3e3b86-8f8c-4d9e-ad67-9f562e077367\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.199749 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" event={"ID":"15b4822e-5f78-4c15-a2ad-641d0af56a1b","Type":"ContainerStarted","Data":"df218e9ce0e6d0f7bfbf2fe593b9244137acba75d4e2a406e2ad9f9ccbd42377"} Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.199820 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" event={"ID":"15b4822e-5f78-4c15-a2ad-641d0af56a1b","Type":"ContainerStarted","Data":"c30ea5d609f982df0665f5f39aaf3c3292aa84b320226f1a36c03539aa534f8d"} Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.201242 4915 generic.go:334] "Generic (PLEG): container finished" podID="42c53a24-3430-4028-a8cd-bfa948466eef" containerID="74b8a9576400a5a3b65e4b1af660b0680d3ef4448982ef5b896b4c8df3657a4e" exitCode=0 Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.201313 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" event={"ID":"42c53a24-3430-4028-a8cd-bfa948466eef","Type":"ContainerDied","Data":"74b8a9576400a5a3b65e4b1af660b0680d3ef4448982ef5b896b4c8df3657a4e"} Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.202861 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb" event={"ID":"507e4049-6297-4243-9692-3a3841f80b4c","Type":"ContainerStarted","Data":"5c708ab8f255d76f626fd5e05999774b4a5acb919f2522d13557eacd153654b1"} Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.204361 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-d8b76" event={"ID":"05cd76ab-027a-4c05-aeb0-702df5277cbe","Type":"ContainerStarted","Data":"164ebbe38497c76c4e8629b9a16af5c7ca35513f9aeb131dd3e21b4282f51ff7"} Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.217187 4915 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-p7n9h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.217249 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" podUID="d83bcc5d-5419-4656-882e-964b0d87e966" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.218473 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.218872 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbbrb\" (UniqueName: \"kubernetes.io/projected/0828ca36-88fa-4eac-95ed-a8bc14a2b840-kube-api-access-nbbrb\") pod \"ingress-operator-5b745b69d9-lbpp5\" (UID: \"0828ca36-88fa-4eac-95ed-a8bc14a2b840\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.228583 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.236307 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.239857 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lpg4\" (UniqueName: \"kubernetes.io/projected/53094d5f-2a39-479b-912d-c766fd9f4fa1-kube-api-access-9lpg4\") pod \"migrator-59844c95c7-4rr2s\" (UID: \"53094d5f-2a39-479b-912d-c766fd9f4fa1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rr2s" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.244151 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.257842 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmdhg\" (UniqueName: \"kubernetes.io/projected/de799d6d-f599-4dfd-a95b-4f7daf00d23a-kube-api-access-wmdhg\") pod \"route-controller-manager-6576b87f9c-762rz\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.278409 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqmr7\" (UniqueName: \"kubernetes.io/projected/96ed7ec2-f67e-4a3a-9023-34ac783d91db-kube-api-access-hqmr7\") pod \"cluster-image-registry-operator-dc59b4c8b-zgpwn\" (UID: \"96ed7ec2-f67e-4a3a-9023-34ac783d91db\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.306335 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sb4zq"] Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.309433 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksgr2\" 
(UniqueName: \"kubernetes.io/projected/fa3f4f80-34b9-4283-874c-ace65ae5ed4d-kube-api-access-ksgr2\") pod \"multus-admission-controller-857f4d67dd-nmx6q\" (UID: \"fa3f4f80-34b9-4283-874c-ace65ae5ed4d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nmx6q" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.318928 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb5cp\" (UniqueName: \"kubernetes.io/projected/c25872a5-42e3-4e20-ad54-594477784fa2-kube-api-access-wb5cp\") pod \"console-f9d7485db-x7cqd\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.333614 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzr6t\" (UniqueName: \"kubernetes.io/projected/e70a867a-f31a-495c-9f2a-13a996188c84-kube-api-access-qzr6t\") pod \"olm-operator-6b444d44fb-j2dm5\" (UID: \"e70a867a-f31a-495c-9f2a-13a996188c84\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.353501 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcv8q\" (UniqueName: \"kubernetes.io/projected/56ba8653-31c9-4e8a-bc67-9bfd3514677b-kube-api-access-fcv8q\") pod \"machine-approver-56656f9798-5qvrk\" (UID: \"56ba8653-31c9-4e8a-bc67-9bfd3514677b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" Nov 24 21:22:03 crc kubenswrapper[4915]: W1124 21:22:03.396879 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcc4ebd4_dcbc_464e_91cf_a52a944b461f.slice/crio-a8fd5beeb011fa37ec794d54e5011b99aade847b080baef9c697d2c7a05c1546 WatchSource:0}: Error finding container a8fd5beeb011fa37ec794d54e5011b99aade847b080baef9c697d2c7a05c1546: Status 404 returned error can't find the container with id 
a8fd5beeb011fa37ec794d54e5011b99aade847b080baef9c697d2c7a05c1546 Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.412482 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.416477 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/03660b87-4011-4ee8-ac77-a26a9f853005-ca-trust-extracted\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.416510 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/03660b87-4011-4ee8-ac77-a26a9f853005-registry-certificates\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.416538 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.416558 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptlpg\" (UniqueName: \"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-kube-api-access-ptlpg\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.416577 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03660b87-4011-4ee8-ac77-a26a9f853005-trusted-ca\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.416628 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gdld\" (UniqueName: \"kubernetes.io/projected/ea64de69-8cc1-4935-8dc5-908bf44bb2d4-kube-api-access-9gdld\") pod \"downloads-7954f5f757-d7cvw\" (UID: \"ea64de69-8cc1-4935-8dc5-908bf44bb2d4\") " pod="openshift-console/downloads-7954f5f757-d7cvw" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.416767 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-registry-tls\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.416821 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/03660b87-4011-4ee8-ac77-a26a9f853005-installation-pull-secrets\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.416841 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-bound-sa-token\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: E1124 21:22:03.417437 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:03.91742439 +0000 UTC m=+142.233676663 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.437306 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.445944 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.453398 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.471031 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.485896 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.495724 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nmx6q" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.503010 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rr2s" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.511245 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.518654 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.518838 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll7cg\" (UniqueName: \"kubernetes.io/projected/859b0c41-321d-4408-9971-f024de0f81aa-kube-api-access-ll7cg\") pod \"machine-config-controller-84d6567774-zjqjg\" (UID: \"859b0c41-321d-4408-9971-f024de0f81aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.518895 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9f6l\" (UniqueName: \"kubernetes.io/projected/3d71d3fc-a6c9-4e59-992f-03d27f8d14b9-kube-api-access-h9f6l\") pod \"dns-default-hmd9t\" (UID: \"3d71d3fc-a6c9-4e59-992f-03d27f8d14b9\") " pod="openshift-dns/dns-default-hmd9t" Nov 
24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.518915 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-default-certificate\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.518937 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5212e9eb-ead2-4974-b668-ffe2d6a586d4-images\") pod \"machine-config-operator-74547568cd-pk8zt\" (UID: \"5212e9eb-ead2-4974-b668-ffe2d6a586d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.518951 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5f9100cf-9045-42a6-b0f5-27913d4839b4-signing-cabundle\") pod \"service-ca-9c57cc56f-dwnvl\" (UID: \"5f9100cf-9045-42a6-b0f5-27913d4839b4\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwnvl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.518967 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-mountpoint-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.518993 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/aee2119f-dcf9-46fa-9341-13a9b5faec77-node-bootstrap-token\") pod 
\"machine-config-server-lrzbs\" (UID: \"aee2119f-dcf9-46fa-9341-13a9b5faec77\") " pod="openshift-machine-config-operator/machine-config-server-lrzbs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.519046 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-metrics-certs\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.519084 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78aca27e-b8fd-4b40-a1d0-389faf2593c6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-q87rd\" (UID: \"78aca27e-b8fd-4b40-a1d0-389faf2593c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.519109 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-csi-data-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.519125 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/05335a1e-219e-4242-8c4a-4253ac1f346d-profile-collector-cert\") pod \"catalog-operator-68c6474976-s4nzx\" (UID: \"05335a1e-219e-4242-8c4a-4253ac1f346d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.519141 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/859b0c41-321d-4408-9971-f024de0f81aa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zjqjg\" (UID: \"859b0c41-321d-4408-9971-f024de0f81aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.519178 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5212e9eb-ead2-4974-b668-ffe2d6a586d4-proxy-tls\") pod \"machine-config-operator-74547568cd-pk8zt\" (UID: \"5212e9eb-ead2-4974-b668-ffe2d6a586d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.519217 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/03660b87-4011-4ee8-ac77-a26a9f853005-ca-trust-extracted\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.519233 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29df6b7f-c6e5-4a9e-9257-57ccea81b430-config\") pod \"kube-controller-manager-operator-78b949d7b-j2vd2\" (UID: \"29df6b7f-c6e5-4a9e-9257-57ccea81b430\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.519271 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-service-ca-bundle\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.519299 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-config\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: E1124 21:22:03.520494 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:04.020477257 +0000 UTC m=+142.336729430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.521798 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/03660b87-4011-4ee8-ac77-a26a9f853005-ca-trust-extracted\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.522405 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/03660b87-4011-4ee8-ac77-a26a9f853005-registry-certificates\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.522465 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd6fba7-d689-47af-bf34-c2f20aa2893b-serving-cert\") pod \"service-ca-operator-777779d784-brhzz\" (UID: \"1dd6fba7-d689-47af-bf34-c2f20aa2893b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-brhzz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.522865 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5212e9eb-ead2-4974-b668-ffe2d6a586d4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pk8zt\" (UID: \"5212e9eb-ead2-4974-b668-ffe2d6a586d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.522919 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.522948 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmfd4\" (UniqueName: \"kubernetes.io/projected/c5109e97-fa8d-4eb5-afff-4c4ea89f1948-kube-api-access-cmfd4\") pod \"ingress-canary-bzdrs\" (UID: \"c5109e97-fa8d-4eb5-afff-4c4ea89f1948\") " pod="openshift-ingress-canary/ingress-canary-bzdrs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.522991 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03660b87-4011-4ee8-ac77-a26a9f853005-trusted-ca\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523017 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72xwb\" (UniqueName: \"kubernetes.io/projected/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-kube-api-access-72xwb\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc 
kubenswrapper[4915]: I1124 21:22:03.523059 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5f9100cf-9045-42a6-b0f5-27913d4839b4-signing-key\") pod \"service-ca-9c57cc56f-dwnvl\" (UID: \"5f9100cf-9045-42a6-b0f5-27913d4839b4\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwnvl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523133 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38f0c117-4d15-4ac3-aece-8f0189d91bdb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zbtbl\" (UID: \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523204 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jj99\" (UniqueName: \"kubernetes.io/projected/05335a1e-219e-4242-8c4a-4253ac1f346d-kube-api-access-5jj99\") pod \"catalog-operator-68c6474976-s4nzx\" (UID: \"05335a1e-219e-4242-8c4a-4253ac1f346d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523247 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-etcd-ca\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523371 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-etcd-service-ca\") pod 
\"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523396 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/05335a1e-219e-4242-8c4a-4253ac1f346d-srv-cert\") pod \"catalog-operator-68c6474976-s4nzx\" (UID: \"05335a1e-219e-4242-8c4a-4253ac1f346d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523444 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78aca27e-b8fd-4b40-a1d0-389faf2593c6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-q87rd\" (UID: \"78aca27e-b8fd-4b40-a1d0-389faf2593c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523476 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54fcdac8-7a83-4cc3-8115-3600324106c4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nz685\" (UID: \"54fcdac8-7a83-4cc3-8115-3600324106c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523585 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpf8p\" (UniqueName: \"kubernetes.io/projected/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-kube-api-access-fpf8p\") pod \"collect-profiles-29400315-hnmlz\" (UID: \"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" Nov 24 21:22:03 
crc kubenswrapper[4915]: I1124 21:22:03.523633 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-registration-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523662 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-bound-sa-token\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523729 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-registry-tls\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523753 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/03660b87-4011-4ee8-ac77-a26a9f853005-installation-pull-secrets\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523795 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd6fba7-d689-47af-bf34-c2f20aa2893b-config\") pod \"service-ca-operator-777779d784-brhzz\" (UID: \"1dd6fba7-d689-47af-bf34-c2f20aa2893b\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-brhzz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523898 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-etcd-client\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.523505 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/03660b87-4011-4ee8-ac77-a26a9f853005-registry-certificates\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.525050 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65aeafa6-f0b4-4983-827d-9a70340306ae-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lz5zs\" (UID: \"65aeafa6-f0b4-4983-827d-9a70340306ae\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.525077 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/859b0c41-321d-4408-9971-f024de0f81aa-proxy-tls\") pod \"machine-config-controller-84d6567774-zjqjg\" (UID: \"859b0c41-321d-4408-9971-f024de0f81aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" Nov 24 21:22:03 crc kubenswrapper[4915]: E1124 21:22:03.525125 4915 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:04.025110338 +0000 UTC m=+142.341362701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.525173 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9m4n\" (UniqueName: \"kubernetes.io/projected/aee2119f-dcf9-46fa-9341-13a9b5faec77-kube-api-access-z9m4n\") pod \"machine-config-server-lrzbs\" (UID: \"aee2119f-dcf9-46fa-9341-13a9b5faec77\") " pod="openshift-machine-config-operator/machine-config-server-lrzbs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.525314 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-serving-cert\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.525377 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-secret-volume\") pod \"collect-profiles-29400315-hnmlz\" (UID: \"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" 
Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.525396 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29df6b7f-c6e5-4a9e-9257-57ccea81b430-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j2vd2\" (UID: \"29df6b7f-c6e5-4a9e-9257-57ccea81b430\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.528223 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29df6b7f-c6e5-4a9e-9257-57ccea81b430-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j2vd2\" (UID: \"29df6b7f-c6e5-4a9e-9257-57ccea81b430\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.528248 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87x9m\" (UniqueName: \"kubernetes.io/projected/78aca27e-b8fd-4b40-a1d0-389faf2593c6-kube-api-access-87x9m\") pod \"kube-storage-version-migrator-operator-b67b599dd-q87rd\" (UID: \"78aca27e-b8fd-4b40-a1d0-389faf2593c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.528277 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz7zc\" (UniqueName: \"kubernetes.io/projected/7eb280c8-f12e-4971-bf46-3aaaff234a87-kube-api-access-fz7zc\") pod \"dns-operator-744455d44c-rflqm\" (UID: \"7eb280c8-f12e-4971-bf46-3aaaff234a87\") " pod="openshift-dns-operator/dns-operator-744455d44c-rflqm" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.528296 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-socket-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.528314 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54fcdac8-7a83-4cc3-8115-3600324106c4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nz685\" (UID: \"54fcdac8-7a83-4cc3-8115-3600324106c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.528552 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-stats-auth\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.528624 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03660b87-4011-4ee8-ac77-a26a9f853005-trusted-ca\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.528670 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptlpg\" (UniqueName: \"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-kube-api-access-ptlpg\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.528689 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d71d3fc-a6c9-4e59-992f-03d27f8d14b9-metrics-tls\") pod \"dns-default-hmd9t\" (UID: \"3d71d3fc-a6c9-4e59-992f-03d27f8d14b9\") " pod="openshift-dns/dns-default-hmd9t" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.528707 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ldw4\" (UniqueName: \"kubernetes.io/projected/5f9100cf-9045-42a6-b0f5-27913d4839b4-kube-api-access-4ldw4\") pod \"service-ca-9c57cc56f-dwnvl\" (UID: \"5f9100cf-9045-42a6-b0f5-27913d4839b4\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwnvl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.528860 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbzrf\" (UniqueName: \"kubernetes.io/projected/1dd6fba7-d689-47af-bf34-c2f20aa2893b-kube-api-access-hbzrf\") pod \"service-ca-operator-777779d784-brhzz\" (UID: \"1dd6fba7-d689-47af-bf34-c2f20aa2893b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-brhzz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.528958 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5109e97-fa8d-4eb5-afff-4c4ea89f1948-cert\") pod \"ingress-canary-bzdrs\" (UID: \"c5109e97-fa8d-4eb5-afff-4c4ea89f1948\") " pod="openshift-ingress-canary/ingress-canary-bzdrs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.528998 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpzls\" (UniqueName: 
\"kubernetes.io/projected/5212e9eb-ead2-4974-b668-ffe2d6a586d4-kube-api-access-zpzls\") pod \"machine-config-operator-74547568cd-pk8zt\" (UID: \"5212e9eb-ead2-4974-b668-ffe2d6a586d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.529015 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4srf\" (UniqueName: \"kubernetes.io/projected/38f0c117-4d15-4ac3-aece-8f0189d91bdb-kube-api-access-h4srf\") pod \"marketplace-operator-79b997595-zbtbl\" (UID: \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.529085 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d71d3fc-a6c9-4e59-992f-03d27f8d14b9-config-volume\") pod \"dns-default-hmd9t\" (UID: \"3d71d3fc-a6c9-4e59-992f-03d27f8d14b9\") " pod="openshift-dns/dns-default-hmd9t" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.529103 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc27q\" (UniqueName: \"kubernetes.io/projected/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-kube-api-access-bc27q\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.529125 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flpds\" (UniqueName: \"kubernetes.io/projected/f3e83404-96f6-4a56-9642-316296b61562-kube-api-access-flpds\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc 
kubenswrapper[4915]: I1124 21:22:03.529184 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gdld\" (UniqueName: \"kubernetes.io/projected/ea64de69-8cc1-4935-8dc5-908bf44bb2d4-kube-api-access-9gdld\") pod \"downloads-7954f5f757-d7cvw\" (UID: \"ea64de69-8cc1-4935-8dc5-908bf44bb2d4\") " pod="openshift-console/downloads-7954f5f757-d7cvw" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.529204 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/aee2119f-dcf9-46fa-9341-13a9b5faec77-certs\") pod \"machine-config-server-lrzbs\" (UID: \"aee2119f-dcf9-46fa-9341-13a9b5faec77\") " pod="openshift-machine-config-operator/machine-config-server-lrzbs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.529221 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/38f0c117-4d15-4ac3-aece-8f0189d91bdb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zbtbl\" (UID: \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.529255 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6x2j\" (UniqueName: \"kubernetes.io/projected/65aeafa6-f0b4-4983-827d-9a70340306ae-kube-api-access-x6x2j\") pod \"openshift-controller-manager-operator-756b6f6bc6-lz5zs\" (UID: \"65aeafa6-f0b4-4983-827d-9a70340306ae\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.529293 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/65aeafa6-f0b4-4983-827d-9a70340306ae-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lz5zs\" (UID: \"65aeafa6-f0b4-4983-827d-9a70340306ae\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.529312 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54fcdac8-7a83-4cc3-8115-3600324106c4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nz685\" (UID: \"54fcdac8-7a83-4cc3-8115-3600324106c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.529353 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7eb280c8-f12e-4971-bf46-3aaaff234a87-metrics-tls\") pod \"dns-operator-744455d44c-rflqm\" (UID: \"7eb280c8-f12e-4971-bf46-3aaaff234a87\") " pod="openshift-dns-operator/dns-operator-744455d44c-rflqm" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.529372 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcfsf\" (UniqueName: \"kubernetes.io/projected/fe4383de-d0d0-423d-aca2-f3dc1da5acba-kube-api-access-zcfsf\") pod \"control-plane-machine-set-operator-78cbb6b69f-rpff6\" (UID: \"fe4383de-d0d0-423d-aca2-f3dc1da5acba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rpff6" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.529391 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-plugins-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: 
\"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.529837 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-config-volume\") pod \"collect-profiles-29400315-hnmlz\" (UID: \"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.529898 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe4383de-d0d0-423d-aca2-f3dc1da5acba-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rpff6\" (UID: \"fe4383de-d0d0-423d-aca2-f3dc1da5acba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rpff6" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.530506 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-registry-tls\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.531554 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/03660b87-4011-4ee8-ac77-a26a9f853005-installation-pull-secrets\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.539426 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv"] Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.556671 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-bound-sa-token\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.577770 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptlpg\" (UniqueName: \"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-kube-api-access-ptlpg\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.630717 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.630925 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-plugins-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.630947 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-config-volume\") pod \"collect-profiles-29400315-hnmlz\" (UID: 
\"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.630969 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe4383de-d0d0-423d-aca2-f3dc1da5acba-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rpff6\" (UID: \"fe4383de-d0d0-423d-aca2-f3dc1da5acba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rpff6" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.630991 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9f6l\" (UniqueName: \"kubernetes.io/projected/3d71d3fc-a6c9-4e59-992f-03d27f8d14b9-kube-api-access-h9f6l\") pod \"dns-default-hmd9t\" (UID: \"3d71d3fc-a6c9-4e59-992f-03d27f8d14b9\") " pod="openshift-dns/dns-default-hmd9t" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631007 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll7cg\" (UniqueName: \"kubernetes.io/projected/859b0c41-321d-4408-9971-f024de0f81aa-kube-api-access-ll7cg\") pod \"machine-config-controller-84d6567774-zjqjg\" (UID: \"859b0c41-321d-4408-9971-f024de0f81aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631028 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5f9100cf-9045-42a6-b0f5-27913d4839b4-signing-cabundle\") pod \"service-ca-9c57cc56f-dwnvl\" (UID: \"5f9100cf-9045-42a6-b0f5-27913d4839b4\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwnvl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631044 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-default-certificate\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631060 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5212e9eb-ead2-4974-b668-ffe2d6a586d4-images\") pod \"machine-config-operator-74547568cd-pk8zt\" (UID: \"5212e9eb-ead2-4974-b668-ffe2d6a586d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631076 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/aee2119f-dcf9-46fa-9341-13a9b5faec77-node-bootstrap-token\") pod \"machine-config-server-lrzbs\" (UID: \"aee2119f-dcf9-46fa-9341-13a9b5faec77\") " pod="openshift-machine-config-operator/machine-config-server-lrzbs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631090 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-mountpoint-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631115 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-metrics-certs\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631137 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78aca27e-b8fd-4b40-a1d0-389faf2593c6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-q87rd\" (UID: \"78aca27e-b8fd-4b40-a1d0-389faf2593c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631160 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-csi-data-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631176 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/05335a1e-219e-4242-8c4a-4253ac1f346d-profile-collector-cert\") pod \"catalog-operator-68c6474976-s4nzx\" (UID: \"05335a1e-219e-4242-8c4a-4253ac1f346d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631191 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/859b0c41-321d-4408-9971-f024de0f81aa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zjqjg\" (UID: \"859b0c41-321d-4408-9971-f024de0f81aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631209 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5212e9eb-ead2-4974-b668-ffe2d6a586d4-proxy-tls\") pod \"machine-config-operator-74547568cd-pk8zt\" (UID: \"5212e9eb-ead2-4974-b668-ffe2d6a586d4\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631226 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-service-ca-bundle\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631242 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29df6b7f-c6e5-4a9e-9257-57ccea81b430-config\") pod \"kube-controller-manager-operator-78b949d7b-j2vd2\" (UID: \"29df6b7f-c6e5-4a9e-9257-57ccea81b430\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631258 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-config\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631273 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd6fba7-d689-47af-bf34-c2f20aa2893b-serving-cert\") pod \"service-ca-operator-777779d784-brhzz\" (UID: \"1dd6fba7-d689-47af-bf34-c2f20aa2893b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-brhzz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631290 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5212e9eb-ead2-4974-b668-ffe2d6a586d4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pk8zt\" (UID: \"5212e9eb-ead2-4974-b668-ffe2d6a586d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631313 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmfd4\" (UniqueName: \"kubernetes.io/projected/c5109e97-fa8d-4eb5-afff-4c4ea89f1948-kube-api-access-cmfd4\") pod \"ingress-canary-bzdrs\" (UID: \"c5109e97-fa8d-4eb5-afff-4c4ea89f1948\") " pod="openshift-ingress-canary/ingress-canary-bzdrs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631329 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72xwb\" (UniqueName: \"kubernetes.io/projected/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-kube-api-access-72xwb\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631344 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5f9100cf-9045-42a6-b0f5-27913d4839b4-signing-key\") pod \"service-ca-9c57cc56f-dwnvl\" (UID: \"5f9100cf-9045-42a6-b0f5-27913d4839b4\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwnvl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631361 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38f0c117-4d15-4ac3-aece-8f0189d91bdb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zbtbl\" (UID: \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631377 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5jj99\" (UniqueName: \"kubernetes.io/projected/05335a1e-219e-4242-8c4a-4253ac1f346d-kube-api-access-5jj99\") pod \"catalog-operator-68c6474976-s4nzx\" (UID: \"05335a1e-219e-4242-8c4a-4253ac1f346d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631392 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-etcd-ca\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631443 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-etcd-service-ca\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631459 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/05335a1e-219e-4242-8c4a-4253ac1f346d-srv-cert\") pod \"catalog-operator-68c6474976-s4nzx\" (UID: \"05335a1e-219e-4242-8c4a-4253ac1f346d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631473 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54fcdac8-7a83-4cc3-8115-3600324106c4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nz685\" (UID: \"54fcdac8-7a83-4cc3-8115-3600324106c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685" Nov 
24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631488 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78aca27e-b8fd-4b40-a1d0-389faf2593c6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-q87rd\" (UID: \"78aca27e-b8fd-4b40-a1d0-389faf2593c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631504 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpf8p\" (UniqueName: \"kubernetes.io/projected/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-kube-api-access-fpf8p\") pod \"collect-profiles-29400315-hnmlz\" (UID: \"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631518 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-registration-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631537 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd6fba7-d689-47af-bf34-c2f20aa2893b-config\") pod \"service-ca-operator-777779d784-brhzz\" (UID: \"1dd6fba7-d689-47af-bf34-c2f20aa2893b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-brhzz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631552 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-etcd-client\") pod \"etcd-operator-b45778765-7pvlk\" 
(UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631569 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/859b0c41-321d-4408-9971-f024de0f81aa-proxy-tls\") pod \"machine-config-controller-84d6567774-zjqjg\" (UID: \"859b0c41-321d-4408-9971-f024de0f81aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631585 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65aeafa6-f0b4-4983-827d-9a70340306ae-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lz5zs\" (UID: \"65aeafa6-f0b4-4983-827d-9a70340306ae\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631601 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9m4n\" (UniqueName: \"kubernetes.io/projected/aee2119f-dcf9-46fa-9341-13a9b5faec77-kube-api-access-z9m4n\") pod \"machine-config-server-lrzbs\" (UID: \"aee2119f-dcf9-46fa-9341-13a9b5faec77\") " pod="openshift-machine-config-operator/machine-config-server-lrzbs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631617 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-serving-cert\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631636 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-secret-volume\") pod \"collect-profiles-29400315-hnmlz\" (UID: \"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631652 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29df6b7f-c6e5-4a9e-9257-57ccea81b430-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j2vd2\" (UID: \"29df6b7f-c6e5-4a9e-9257-57ccea81b430\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631666 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29df6b7f-c6e5-4a9e-9257-57ccea81b430-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j2vd2\" (UID: \"29df6b7f-c6e5-4a9e-9257-57ccea81b430\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631685 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54fcdac8-7a83-4cc3-8115-3600324106c4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nz685\" (UID: \"54fcdac8-7a83-4cc3-8115-3600324106c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631699 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87x9m\" (UniqueName: \"kubernetes.io/projected/78aca27e-b8fd-4b40-a1d0-389faf2593c6-kube-api-access-87x9m\") pod \"kube-storage-version-migrator-operator-b67b599dd-q87rd\" (UID: \"78aca27e-b8fd-4b40-a1d0-389faf2593c6\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631716 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz7zc\" (UniqueName: \"kubernetes.io/projected/7eb280c8-f12e-4971-bf46-3aaaff234a87-kube-api-access-fz7zc\") pod \"dns-operator-744455d44c-rflqm\" (UID: \"7eb280c8-f12e-4971-bf46-3aaaff234a87\") " pod="openshift-dns-operator/dns-operator-744455d44c-rflqm" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631730 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-socket-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631756 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-stats-auth\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631769 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d71d3fc-a6c9-4e59-992f-03d27f8d14b9-metrics-tls\") pod \"dns-default-hmd9t\" (UID: \"3d71d3fc-a6c9-4e59-992f-03d27f8d14b9\") " pod="openshift-dns/dns-default-hmd9t" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631801 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ldw4\" (UniqueName: \"kubernetes.io/projected/5f9100cf-9045-42a6-b0f5-27913d4839b4-kube-api-access-4ldw4\") pod \"service-ca-9c57cc56f-dwnvl\" (UID: 
\"5f9100cf-9045-42a6-b0f5-27913d4839b4\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwnvl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631820 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbzrf\" (UniqueName: \"kubernetes.io/projected/1dd6fba7-d689-47af-bf34-c2f20aa2893b-kube-api-access-hbzrf\") pod \"service-ca-operator-777779d784-brhzz\" (UID: \"1dd6fba7-d689-47af-bf34-c2f20aa2893b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-brhzz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631843 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5109e97-fa8d-4eb5-afff-4c4ea89f1948-cert\") pod \"ingress-canary-bzdrs\" (UID: \"c5109e97-fa8d-4eb5-afff-4c4ea89f1948\") " pod="openshift-ingress-canary/ingress-canary-bzdrs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631867 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpzls\" (UniqueName: \"kubernetes.io/projected/5212e9eb-ead2-4974-b668-ffe2d6a586d4-kube-api-access-zpzls\") pod \"machine-config-operator-74547568cd-pk8zt\" (UID: \"5212e9eb-ead2-4974-b668-ffe2d6a586d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631891 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4srf\" (UniqueName: \"kubernetes.io/projected/38f0c117-4d15-4ac3-aece-8f0189d91bdb-kube-api-access-h4srf\") pod \"marketplace-operator-79b997595-zbtbl\" (UID: \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631916 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/3d71d3fc-a6c9-4e59-992f-03d27f8d14b9-config-volume\") pod \"dns-default-hmd9t\" (UID: \"3d71d3fc-a6c9-4e59-992f-03d27f8d14b9\") " pod="openshift-dns/dns-default-hmd9t" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631938 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc27q\" (UniqueName: \"kubernetes.io/projected/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-kube-api-access-bc27q\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631953 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flpds\" (UniqueName: \"kubernetes.io/projected/f3e83404-96f6-4a56-9642-316296b61562-kube-api-access-flpds\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631969 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/38f0c117-4d15-4ac3-aece-8f0189d91bdb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zbtbl\" (UID: \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.631988 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/aee2119f-dcf9-46fa-9341-13a9b5faec77-certs\") pod \"machine-config-server-lrzbs\" (UID: \"aee2119f-dcf9-46fa-9341-13a9b5faec77\") " pod="openshift-machine-config-operator/machine-config-server-lrzbs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.632004 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-x6x2j\" (UniqueName: \"kubernetes.io/projected/65aeafa6-f0b4-4983-827d-9a70340306ae-kube-api-access-x6x2j\") pod \"openshift-controller-manager-operator-756b6f6bc6-lz5zs\" (UID: \"65aeafa6-f0b4-4983-827d-9a70340306ae\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.632020 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54fcdac8-7a83-4cc3-8115-3600324106c4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nz685\" (UID: \"54fcdac8-7a83-4cc3-8115-3600324106c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.632043 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65aeafa6-f0b4-4983-827d-9a70340306ae-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lz5zs\" (UID: \"65aeafa6-f0b4-4983-827d-9a70340306ae\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.632060 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7eb280c8-f12e-4971-bf46-3aaaff234a87-metrics-tls\") pod \"dns-operator-744455d44c-rflqm\" (UID: \"7eb280c8-f12e-4971-bf46-3aaaff234a87\") " pod="openshift-dns-operator/dns-operator-744455d44c-rflqm" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.632075 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcfsf\" (UniqueName: \"kubernetes.io/projected/fe4383de-d0d0-423d-aca2-f3dc1da5acba-kube-api-access-zcfsf\") pod \"control-plane-machine-set-operator-78cbb6b69f-rpff6\" (UID: 
\"fe4383de-d0d0-423d-aca2-f3dc1da5acba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rpff6" Nov 24 21:22:03 crc kubenswrapper[4915]: E1124 21:22:03.632264 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:04.132250396 +0000 UTC m=+142.448502569 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.633736 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-service-ca-bundle\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.633860 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-mountpoint-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.634938 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/38f0c117-4d15-4ac3-aece-8f0189d91bdb-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-zbtbl\" (UID: \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.635580 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65aeafa6-f0b4-4983-827d-9a70340306ae-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lz5zs\" (UID: \"65aeafa6-f0b4-4983-827d-9a70340306ae\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.635585 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-etcd-service-ca\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.635679 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/5f9100cf-9045-42a6-b0f5-27913d4839b4-signing-cabundle\") pod \"service-ca-9c57cc56f-dwnvl\" (UID: \"5f9100cf-9045-42a6-b0f5-27913d4839b4\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwnvl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.635683 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-plugins-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.635798 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/78aca27e-b8fd-4b40-a1d0-389faf2593c6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-q87rd\" (UID: \"78aca27e-b8fd-4b40-a1d0-389faf2593c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.637199 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29df6b7f-c6e5-4a9e-9257-57ccea81b430-config\") pod \"kube-controller-manager-operator-78b949d7b-j2vd2\" (UID: \"29df6b7f-c6e5-4a9e-9257-57ccea81b430\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.637257 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3d71d3fc-a6c9-4e59-992f-03d27f8d14b9-metrics-tls\") pod \"dns-default-hmd9t\" (UID: \"3d71d3fc-a6c9-4e59-992f-03d27f8d14b9\") " pod="openshift-dns/dns-default-hmd9t" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.637895 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3d71d3fc-a6c9-4e59-992f-03d27f8d14b9-config-volume\") pod \"dns-default-hmd9t\" (UID: \"3d71d3fc-a6c9-4e59-992f-03d27f8d14b9\") " pod="openshift-dns/dns-default-hmd9t" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.637943 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/859b0c41-321d-4408-9971-f024de0f81aa-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-zjqjg\" (UID: \"859b0c41-321d-4408-9971-f024de0f81aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.638470 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/859b0c41-321d-4408-9971-f024de0f81aa-proxy-tls\") pod \"machine-config-controller-84d6567774-zjqjg\" (UID: \"859b0c41-321d-4408-9971-f024de0f81aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.638603 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-registration-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.639263 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd6fba7-d689-47af-bf34-c2f20aa2893b-config\") pod \"service-ca-operator-777779d784-brhzz\" (UID: \"1dd6fba7-d689-47af-bf34-c2f20aa2893b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-brhzz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.639602 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/5f9100cf-9045-42a6-b0f5-27913d4839b4-signing-key\") pod \"service-ca-9c57cc56f-dwnvl\" (UID: \"5f9100cf-9045-42a6-b0f5-27913d4839b4\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwnvl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.640579 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-stats-auth\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.640952 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-csi-data-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.640978 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/aee2119f-dcf9-46fa-9341-13a9b5faec77-node-bootstrap-token\") pod \"machine-config-server-lrzbs\" (UID: \"aee2119f-dcf9-46fa-9341-13a9b5faec77\") " pod="openshift-machine-config-operator/machine-config-server-lrzbs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.641302 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/05335a1e-219e-4242-8c4a-4253ac1f346d-profile-collector-cert\") pod \"catalog-operator-68c6474976-s4nzx\" (UID: \"05335a1e-219e-4242-8c4a-4253ac1f346d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.642349 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/05335a1e-219e-4242-8c4a-4253ac1f346d-srv-cert\") pod \"catalog-operator-68c6474976-s4nzx\" (UID: \"05335a1e-219e-4242-8c4a-4253ac1f346d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.642358 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/aee2119f-dcf9-46fa-9341-13a9b5faec77-certs\") pod \"machine-config-server-lrzbs\" (UID: \"aee2119f-dcf9-46fa-9341-13a9b5faec77\") " pod="openshift-machine-config-operator/machine-config-server-lrzbs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.642885 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-metrics-certs\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.642984 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5212e9eb-ead2-4974-b668-ffe2d6a586d4-images\") pod \"machine-config-operator-74547568cd-pk8zt\" (UID: \"5212e9eb-ead2-4974-b668-ffe2d6a586d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.643180 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f3e83404-96f6-4a56-9642-316296b61562-socket-dir\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.643391 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-config\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.643694 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-etcd-ca\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.645106 4915 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5212e9eb-ead2-4974-b668-ffe2d6a586d4-proxy-tls\") pod \"machine-config-operator-74547568cd-pk8zt\" (UID: \"5212e9eb-ead2-4974-b668-ffe2d6a586d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.645986 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65aeafa6-f0b4-4983-827d-9a70340306ae-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lz5zs\" (UID: \"65aeafa6-f0b4-4983-827d-9a70340306ae\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.646367 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78aca27e-b8fd-4b40-a1d0-389faf2593c6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-q87rd\" (UID: \"78aca27e-b8fd-4b40-a1d0-389faf2593c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.650031 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-default-certificate\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.650207 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-serving-cert\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.650269 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7eb280c8-f12e-4971-bf46-3aaaff234a87-metrics-tls\") pod \"dns-operator-744455d44c-rflqm\" (UID: \"7eb280c8-f12e-4971-bf46-3aaaff234a87\") " pod="openshift-dns-operator/dns-operator-744455d44c-rflqm" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.651411 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29df6b7f-c6e5-4a9e-9257-57ccea81b430-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-j2vd2\" (UID: \"29df6b7f-c6e5-4a9e-9257-57ccea81b430\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.652442 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-etcd-client\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.652837 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/38f0c117-4d15-4ac3-aece-8f0189d91bdb-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-zbtbl\" (UID: \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.670442 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-config-volume\") pod 
\"collect-profiles-29400315-hnmlz\" (UID: \"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.670593 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54fcdac8-7a83-4cc3-8115-3600324106c4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nz685\" (UID: \"54fcdac8-7a83-4cc3-8115-3600324106c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.671349 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5109e97-fa8d-4eb5-afff-4c4ea89f1948-cert\") pod \"ingress-canary-bzdrs\" (UID: \"c5109e97-fa8d-4eb5-afff-4c4ea89f1948\") " pod="openshift-ingress-canary/ingress-canary-bzdrs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.671750 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/fe4383de-d0d0-423d-aca2-f3dc1da5acba-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rpff6\" (UID: \"fe4383de-d0d0-423d-aca2-f3dc1da5acba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rpff6" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.672525 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-secret-volume\") pod \"collect-profiles-29400315-hnmlz\" (UID: \"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.674015 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/54fcdac8-7a83-4cc3-8115-3600324106c4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nz685\" (UID: \"54fcdac8-7a83-4cc3-8115-3600324106c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.676989 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcfsf\" (UniqueName: \"kubernetes.io/projected/fe4383de-d0d0-423d-aca2-f3dc1da5acba-kube-api-access-zcfsf\") pod \"control-plane-machine-set-operator-78cbb6b69f-rpff6\" (UID: \"fe4383de-d0d0-423d-aca2-f3dc1da5acba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rpff6" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.677383 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gdld\" (UniqueName: \"kubernetes.io/projected/ea64de69-8cc1-4935-8dc5-908bf44bb2d4-kube-api-access-9gdld\") pod \"downloads-7954f5f757-d7cvw\" (UID: \"ea64de69-8cc1-4935-8dc5-908bf44bb2d4\") " pod="openshift-console/downloads-7954f5f757-d7cvw" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.680137 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dd6fba7-d689-47af-bf34-c2f20aa2893b-serving-cert\") pod \"service-ca-operator-777779d784-brhzz\" (UID: \"1dd6fba7-d689-47af-bf34-c2f20aa2893b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-brhzz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.716493 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc27q\" (UniqueName: \"kubernetes.io/projected/bb25c8ff-bce3-4e50-ad7e-6cea6815a02b-kube-api-access-bc27q\") pod \"etcd-operator-b45778765-7pvlk\" (UID: \"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b\") " pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc 
kubenswrapper[4915]: I1124 21:22:03.733605 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: E1124 21:22:03.734082 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:04.234067218 +0000 UTC m=+142.550319391 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.749919 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ldw4\" (UniqueName: \"kubernetes.io/projected/5f9100cf-9045-42a6-b0f5-27913d4839b4-kube-api-access-4ldw4\") pod \"service-ca-9c57cc56f-dwnvl\" (UID: \"5f9100cf-9045-42a6-b0f5-27913d4839b4\") " pod="openshift-service-ca/service-ca-9c57cc56f-dwnvl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.760478 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbzrf\" (UniqueName: \"kubernetes.io/projected/1dd6fba7-d689-47af-bf34-c2f20aa2893b-kube-api-access-hbzrf\") pod \"service-ca-operator-777779d784-brhzz\" (UID: 
\"1dd6fba7-d689-47af-bf34-c2f20aa2893b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-brhzz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.778494 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-d7cvw" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.780348 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flpds\" (UniqueName: \"kubernetes.io/projected/f3e83404-96f6-4a56-9642-316296b61562-kube-api-access-flpds\") pod \"csi-hostpathplugin-llzdr\" (UID: \"f3e83404-96f6-4a56-9642-316296b61562\") " pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.797486 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9f6l\" (UniqueName: \"kubernetes.io/projected/3d71d3fc-a6c9-4e59-992f-03d27f8d14b9-kube-api-access-h9f6l\") pod \"dns-default-hmd9t\" (UID: \"3d71d3fc-a6c9-4e59-992f-03d27f8d14b9\") " pod="openshift-dns/dns-default-hmd9t" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.812722 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t"] Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.814269 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct"] Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.817490 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4"] Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.843335 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.843570 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll7cg\" (UniqueName: \"kubernetes.io/projected/859b0c41-321d-4408-9971-f024de0f81aa-kube-api-access-ll7cg\") pod \"machine-config-controller-84d6567774-zjqjg\" (UID: \"859b0c41-321d-4408-9971-f024de0f81aa\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" Nov 24 21:22:03 crc kubenswrapper[4915]: E1124 21:22:03.843825 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:04.343741899 +0000 UTC m=+142.659994132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.856761 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz"] Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.858677 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpzls\" (UniqueName: \"kubernetes.io/projected/5212e9eb-ead2-4974-b668-ffe2d6a586d4-kube-api-access-zpzls\") pod \"machine-config-operator-74547568cd-pk8zt\" (UID: \"5212e9eb-ead2-4974-b668-ffe2d6a586d4\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.858833 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4srf\" (UniqueName: \"kubernetes.io/projected/38f0c117-4d15-4ac3-aece-8f0189d91bdb-kube-api-access-h4srf\") pod \"marketplace-operator-79b997595-zbtbl\" (UID: \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\") " pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.862978 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.877949 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29df6b7f-c6e5-4a9e-9257-57ccea81b430-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-j2vd2\" (UID: \"29df6b7f-c6e5-4a9e-9257-57ccea81b430\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.879872 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-dwnvl" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.887806 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-x7cqd"] Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.916711 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmfd4\" (UniqueName: \"kubernetes.io/projected/c5109e97-fa8d-4eb5-afff-4c4ea89f1948-kube-api-access-cmfd4\") pod \"ingress-canary-bzdrs\" (UID: \"c5109e97-fa8d-4eb5-afff-4c4ea89f1948\") " pod="openshift-ingress-canary/ingress-canary-bzdrs" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.935569 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72xwb\" (UniqueName: \"kubernetes.io/projected/c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5-kube-api-access-72xwb\") pod \"router-default-5444994796-kjj6c\" (UID: \"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5\") " pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.937975 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.941081 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5212e9eb-ead2-4974-b668-ffe2d6a586d4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pk8zt\" (UID: \"5212e9eb-ead2-4974-b668-ffe2d6a586d4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.944794 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:03 crc kubenswrapper[4915]: E1124 21:22:03.945099 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:04.445086423 +0000 UTC m=+142.761338596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.946001 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jj99\" (UniqueName: \"kubernetes.io/projected/05335a1e-219e-4242-8c4a-4253ac1f346d-kube-api-access-5jj99\") pod \"catalog-operator-68c6474976-s4nzx\" (UID: \"05335a1e-219e-4242-8c4a-4253ac1f346d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.956278 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpf8p\" (UniqueName: \"kubernetes.io/projected/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-kube-api-access-fpf8p\") pod \"collect-profiles-29400315-hnmlz\" (UID: \"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.959806 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/54fcdac8-7a83-4cc3-8115-3600324106c4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-nz685\" (UID: \"54fcdac8-7a83-4cc3-8115-3600324106c4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.963605 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rpff6" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.978289 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-brhzz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.985882 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.992728 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87x9m\" (UniqueName: \"kubernetes.io/projected/78aca27e-b8fd-4b40-a1d0-389faf2593c6-kube-api-access-87x9m\") pod \"kube-storage-version-migrator-operator-b67b599dd-q87rd\" (UID: \"78aca27e-b8fd-4b40-a1d0-389faf2593c6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd" Nov 24 21:22:03 crc kubenswrapper[4915]: I1124 21:22:03.992750 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.001309 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz7zc\" (UniqueName: \"kubernetes.io/projected/7eb280c8-f12e-4971-bf46-3aaaff234a87-kube-api-access-fz7zc\") pod \"dns-operator-744455d44c-rflqm\" (UID: \"7eb280c8-f12e-4971-bf46-3aaaff234a87\") " pod="openshift-dns-operator/dns-operator-744455d44c-rflqm" Nov 24 21:22:04 crc kubenswrapper[4915]: W1124 21:22:04.006566 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc57c5c5b_42fb_44e2_bbcb_2a8797d45d58.slice/crio-d6e2e046ffdcceac15d46c39f220e3c44a3eaddfdd0cba73f1ae4d941fbb9403 WatchSource:0}: Error finding container d6e2e046ffdcceac15d46c39f220e3c44a3eaddfdd0cba73f1ae4d941fbb9403: Status 404 returned error can't find the container with id d6e2e046ffdcceac15d46c39f220e3c44a3eaddfdd0cba73f1ae4d941fbb9403 Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.013963 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.030503 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6x2j\" (UniqueName: \"kubernetes.io/projected/65aeafa6-f0b4-4983-827d-9a70340306ae-kube-api-access-x6x2j\") pod \"openshift-controller-manager-operator-756b6f6bc6-lz5zs\" (UID: \"65aeafa6-f0b4-4983-827d-9a70340306ae\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.030935 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bzdrs" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.038701 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9m4n\" (UniqueName: \"kubernetes.io/projected/aee2119f-dcf9-46fa-9341-13a9b5faec77-kube-api-access-z9m4n\") pod \"machine-config-server-lrzbs\" (UID: \"aee2119f-dcf9-46fa-9341-13a9b5faec77\") " pod="openshift-machine-config-operator/machine-config-server-lrzbs" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.047334 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.047647 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:04.547596791 +0000 UTC m=+142.863848974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.058974 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-llzdr" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.068929 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-hmd9t" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.077211 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lrzbs" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.149039 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.149393 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:04.649379361 +0000 UTC m=+142.965631534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.152558 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.170239 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.205413 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4rr2s"] Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.219918 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" event={"ID":"de799d6d-f599-4dfd-a95b-4f7daf00d23a","Type":"ContainerStarted","Data":"c4aa023d6e9855829ef6fd96609477f8aaf39216bb8e0bf1357c4f5823661e6a"} Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.219944 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" event={"ID":"fc3e3b86-8f8c-4d9e-ad67-9f562e077367","Type":"ContainerStarted","Data":"5e4f087c7913d909da0407f03d86b1d5cf09e034d04913d672a80f985072f02b"} Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.219958 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t" event={"ID":"c57c5c5b-42fb-44e2-bbcb-2a8797d45d58","Type":"ContainerStarted","Data":"d6e2e046ffdcceac15d46c39f220e3c44a3eaddfdd0cba73f1ae4d941fbb9403"} Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.219971 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d8b76" event={"ID":"05cd76ab-027a-4c05-aeb0-702df5277cbe","Type":"ContainerStarted","Data":"6072f00c522d62d74ac6685c2d22598480491de93051a91cee762f82ec2d9ee4"} Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.220191 4915 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.232043 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.239259 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x7cqd" event={"ID":"c25872a5-42e3-4e20-ad54-594477784fa2","Type":"ContainerStarted","Data":"301aea8096aea37d3915fad8cca394e0257069f6ff0a3e7f6ddd19f734cf05ea"} Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.249541 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sb4zq" event={"ID":"fcc4ebd4-dcbc-464e-91cf-a52a944b461f","Type":"ContainerStarted","Data":"a8fd5beeb011fa37ec794d54e5011b99aade847b080baef9c697d2c7a05c1546"} Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.249672 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.250473 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.250590 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct" event={"ID":"7f11f7bc-4f9d-4bed-8557-de43a62647b1","Type":"ContainerStarted","Data":"1069a9f1a69c5fe80e7b09823e240a5ce749d24b8340dac02a6530e4094deb82"} Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.250748 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:04.750731056 +0000 UTC m=+143.066983229 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.251184 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv" event={"ID":"8bc4b093-0be5-4df2-9f51-bf3e24d722af","Type":"ContainerStarted","Data":"feb4c408f3bae5c9f8f120582610568b95be670822620e16d58e582375d78fee"} Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.251623 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.252212 4915 generic.go:334] "Generic (PLEG): container finished" podID="15b4822e-5f78-4c15-a2ad-641d0af56a1b" containerID="df218e9ce0e6d0f7bfbf2fe593b9244137acba75d4e2a406e2ad9f9ccbd42377" exitCode=0 Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.252337 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" event={"ID":"15b4822e-5f78-4c15-a2ad-641d0af56a1b","Type":"ContainerDied","Data":"df218e9ce0e6d0f7bfbf2fe593b9244137acba75d4e2a406e2ad9f9ccbd42377"} Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.253431 4915 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:04.753418585 +0000 UTC m=+143.069670758 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.269708 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-rflqm" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.302537 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.344449 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nmx6q"] Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.352639 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.353046 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:04.852981644 +0000 UTC m=+143.169233817 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.353263 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.353557 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:04.853541895 +0000 UTC m=+143.169794158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.454243 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.454513 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:04.954482244 +0000 UTC m=+143.270734407 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.454716 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.455133 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:04.955125478 +0000 UTC m=+143.271377651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.472423 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-bp8jj" podStartSLOduration=122.472407074 podStartE2EDuration="2m2.472407074s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:04.449414447 +0000 UTC m=+142.765666640" watchObservedRunningTime="2025-11-24 21:22:04.472407074 +0000 UTC m=+142.788659257" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.531703 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9mddb" podStartSLOduration=122.531688179 podStartE2EDuration="2m2.531688179s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:04.530425502 +0000 UTC m=+142.846677675" watchObservedRunningTime="2025-11-24 21:22:04.531688179 +0000 UTC m=+142.847940352" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.555726 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.555926 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.055901751 +0000 UTC m=+143.372153924 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.556041 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.556395 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.056382359 +0000 UTC m=+143.372634532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: W1124 21:22:04.631835 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ba8653_31c9_4e8a_bc67_9bfd3514677b.slice/crio-1ec71eae83d32091b6037f802a50d614e80f1b826947d58f2cc4b7f71d4ac68f WatchSource:0}: Error finding container 1ec71eae83d32091b6037f802a50d614e80f1b826947d58f2cc4b7f71d4ac68f: Status 404 returned error can't find the container with id 1ec71eae83d32091b6037f802a50d614e80f1b826947d58f2cc4b7f71d4ac68f Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.639577 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rdk5z"] Nov 24 21:22:04 crc kubenswrapper[4915]: W1124 21:22:04.642550 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc67fa5c2_5760_49b8_83d8_9de5cb8bcbc5.slice/crio-96f66985f03f6514df2808562bc87d23ff94d3d987b3f761a3faac7577d103fa WatchSource:0}: Error finding container 96f66985f03f6514df2808562bc87d23ff94d3d987b3f761a3faac7577d103fa: Status 404 returned error can't find the container with id 96f66985f03f6514df2808562bc87d23ff94d3d987b3f761a3faac7577d103fa Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.646478 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn"] Nov 24 21:22:04 crc kubenswrapper[4915]: W1124 21:22:04.656462 4915 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53094d5f_2a39_479b_912d_c766fd9f4fa1.slice/crio-385da858f1fc8040e2beafe46f809162bd78a714db2ec4c2bc91db2b1f50e125 WatchSource:0}: Error finding container 385da858f1fc8040e2beafe46f809162bd78a714db2ec4c2bc91db2b1f50e125: Status 404 returned error can't find the container with id 385da858f1fc8040e2beafe46f809162bd78a714db2ec4c2bc91db2b1f50e125 Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.656676 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.656860 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.15684068 +0000 UTC m=+143.473092863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.657018 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.657429 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.157417002 +0000 UTC m=+143.473669185 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: W1124 21:22:04.664048 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa3f4f80_34b9_4283_874c_ace65ae5ed4d.slice/crio-ed4e2375e21cfceabb67733f448e6ab97c7045f3c9f0055d7e6072b7c12734fd WatchSource:0}: Error finding container ed4e2375e21cfceabb67733f448e6ab97c7045f3c9f0055d7e6072b7c12734fd: Status 404 returned error can't find the container with id ed4e2375e21cfceabb67733f448e6ab97c7045f3c9f0055d7e6072b7c12734fd Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.676509 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5"] Nov 24 21:22:04 crc kubenswrapper[4915]: W1124 21:22:04.678476 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96ed7ec2_f67e_4a3a_9023_34ac783d91db.slice/crio-290dc4a03c178ded7897bae579fb28bf954b310f291ac4b09ac3268adc55fd0a WatchSource:0}: Error finding container 290dc4a03c178ded7897bae579fb28bf954b310f291ac4b09ac3268adc55fd0a: Status 404 returned error can't find the container with id 290dc4a03c178ded7897bae579fb28bf954b310f291ac4b09ac3268adc55fd0a Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.680837 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5"] Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.777246 4915 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.777410 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.277389103 +0000 UTC m=+143.593641276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.777504 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.777820 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.277809508 +0000 UTC m=+143.594061671 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: W1124 21:22:04.799605 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0828ca36_88fa_4eac_95ed_a8bc14a2b840.slice/crio-880d4b3b9ccc487b346b7236e91da1080eaf02cb710e06515051b4c5da65a287 WatchSource:0}: Error finding container 880d4b3b9ccc487b346b7236e91da1080eaf02cb710e06515051b4c5da65a287: Status 404 returned error can't find the container with id 880d4b3b9ccc487b346b7236e91da1080eaf02cb710e06515051b4c5da65a287 Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.879635 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.880086 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.380067626 +0000 UTC m=+143.696319799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.977001 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-9qlcq" podStartSLOduration=122.976985747 podStartE2EDuration="2m2.976985747s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:04.956892287 +0000 UTC m=+143.273144460" watchObservedRunningTime="2025-11-24 21:22:04.976985747 +0000 UTC m=+143.293237920" Nov 24 21:22:04 crc kubenswrapper[4915]: I1124 21:22:04.981712 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:04 crc kubenswrapper[4915]: E1124 21:22:04.983405 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.483391024 +0000 UTC m=+143.799643197 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.078081 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg"] Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.083582 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:05 crc kubenswrapper[4915]: E1124 21:22:05.083750 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.58372825 +0000 UTC m=+143.899980423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.083809 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:05 crc kubenswrapper[4915]: E1124 21:22:05.084271 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.584263091 +0000 UTC m=+143.900515264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.184501 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:05 crc kubenswrapper[4915]: E1124 21:22:05.184629 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.684610148 +0000 UTC m=+144.000862311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.184902 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:05 crc kubenswrapper[4915]: E1124 21:22:05.185206 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.68519785 +0000 UTC m=+144.001450033 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.267811 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x7cqd" event={"ID":"c25872a5-42e3-4e20-ad54-594477784fa2","Type":"ContainerStarted","Data":"f660b77cb81d6453ec9b45bfba1f25c7ba4f7b63fcff763867e90dbd077f103a"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.279207 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" event={"ID":"e0cfbbde-c3a3-4306-a714-76134e43b495","Type":"ContainerStarted","Data":"6e60012216f579f35f80bd59c6ea8fa962467af67bfe34d8ec5fb121836b9191"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.282195 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nmx6q" event={"ID":"fa3f4f80-34b9-4283-874c-ace65ae5ed4d","Type":"ContainerStarted","Data":"ed4e2375e21cfceabb67733f448e6ab97c7045f3c9f0055d7e6072b7c12734fd"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.284917 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" event={"ID":"859b0c41-321d-4408-9971-f024de0f81aa","Type":"ContainerStarted","Data":"e933cd877ece5f0a52caee5decee3260621d8d6eceab03050a6e275d35843c28"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.286246 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:05 crc kubenswrapper[4915]: E1124 21:22:05.286358 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.786340937 +0000 UTC m=+144.102593110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.286560 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:05 crc kubenswrapper[4915]: E1124 21:22:05.286934 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.786922088 +0000 UTC m=+144.103174261 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.287166 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kjj6c" event={"ID":"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5","Type":"ContainerStarted","Data":"96f66985f03f6514df2808562bc87d23ff94d3d987b3f761a3faac7577d103fa"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.297197 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" event={"ID":"fc3e3b86-8f8c-4d9e-ad67-9f562e077367","Type":"ContainerStarted","Data":"7bc6603e16792465955b0e28caf0fc430550e75276534f638fc85a3f09a49163"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.297611 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.300159 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t" event={"ID":"c57c5c5b-42fb-44e2-bbcb-2a8797d45d58","Type":"ContainerStarted","Data":"83d6a9ee95a05d73d752a280ef92dd2b8eb665b4fe8fbe7d72790c622cdbaed6"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.312041 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" 
event={"ID":"e70a867a-f31a-495c-9f2a-13a996188c84","Type":"ContainerStarted","Data":"6d9f9d7b8394f7b723b62bda283c2476f8d95fc2c807ba20ebc76ed2784d91d5"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.327568 4915 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-c98n4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" start-of-body= Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.327630 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" podUID="fc3e3b86-8f8c-4d9e-ad67-9f562e077367" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": dial tcp 10.217.0.27:5443: connect: connection refused" Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.333425 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" event={"ID":"42c53a24-3430-4028-a8cd-bfa948466eef","Type":"ContainerStarted","Data":"c3280ff31dd1949b125d92bc99a25eb15119d9e18771f0407cf8a1e7c488339a"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.333982 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.387138 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:05 crc kubenswrapper[4915]: E1124 21:22:05.387921 4915 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.887905319 +0000 UTC m=+144.204157492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.396830 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" podStartSLOduration=123.390761544 podStartE2EDuration="2m3.390761544s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:05.382213059 +0000 UTC m=+143.698465222" watchObservedRunningTime="2025-11-24 21:22:05.390761544 +0000 UTC m=+143.707013717" Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.402052 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct" event={"ID":"7f11f7bc-4f9d-4bed-8557-de43a62647b1","Type":"ContainerStarted","Data":"581bb896c5bbd07ade2eeb9593ed518b3fba5f486b543f57a753112ea76ef656"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.411151 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" 
event={"ID":"de799d6d-f599-4dfd-a95b-4f7daf00d23a","Type":"ContainerStarted","Data":"b07a68ec4794038dfd4ead15b5a42a212636a6d7f7aadf45cc8f71afa776286d"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.411520 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.415289 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" event={"ID":"96ed7ec2-f67e-4a3a-9023-34ac783d91db","Type":"ContainerStarted","Data":"290dc4a03c178ded7897bae579fb28bf954b310f291ac4b09ac3268adc55fd0a"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.426360 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lrzbs" event={"ID":"aee2119f-dcf9-46fa-9341-13a9b5faec77","Type":"ContainerStarted","Data":"ed0925ae2da9f2ebc9ea8aeb6cb0c4ddc2f822b7436017ac4596ac3a8155befb"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.429619 4915 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-762rz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.429654 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" podUID="de799d6d-f599-4dfd-a95b-4f7daf00d23a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.445036 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv" event={"ID":"8bc4b093-0be5-4df2-9f51-bf3e24d722af","Type":"ContainerStarted","Data":"b5cb22001e2ce2a160129c1f741951dfbfcf451c43cac3cf4db256d5267e9f5b"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.455078 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rr2s" event={"ID":"53094d5f-2a39-479b-912d-c766fd9f4fa1","Type":"ContainerStarted","Data":"385da858f1fc8040e2beafe46f809162bd78a714db2ec4c2bc91db2b1f50e125"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.461537 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sb4zq" event={"ID":"fcc4ebd4-dcbc-464e-91cf-a52a944b461f","Type":"ContainerStarted","Data":"0c44e2032ddbdccdd75daf997c13e6ab4b673533c1f6a08011f98db5c1e68d58"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.462159 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.465468 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" event={"ID":"56ba8653-31c9-4e8a-bc67-9bfd3514677b","Type":"ContainerStarted","Data":"1ec71eae83d32091b6037f802a50d614e80f1b826947d58f2cc4b7f71d4ac68f"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.467197 4915 patch_prober.go:28] interesting pod/console-operator-58897d9998-sb4zq container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.467231 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sb4zq" 
podUID="fcc4ebd4-dcbc-464e-91cf-a52a944b461f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.468558 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" event={"ID":"0828ca36-88fa-4eac-95ed-a8bc14a2b840","Type":"ContainerStarted","Data":"880d4b3b9ccc487b346b7236e91da1080eaf02cb710e06515051b4c5da65a287"} Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.498818 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:05 crc kubenswrapper[4915]: E1124 21:22:05.499572 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:05.999559484 +0000 UTC m=+144.315811657 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.600119 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:05 crc kubenswrapper[4915]: E1124 21:22:05.600353 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:06.100321587 +0000 UTC m=+144.416573750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.600847 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:05 crc kubenswrapper[4915]: E1124 21:22:05.607451 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:06.107435578 +0000 UTC m=+144.423687751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.702604 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:05 crc kubenswrapper[4915]: E1124 21:22:05.703356 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:06.203340313 +0000 UTC m=+144.519592486 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.756858 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-dwnvl"]
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.769330 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-hmd9t"]
Nov 24 21:22:05 crc kubenswrapper[4915]: W1124 21:22:05.802883 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f9100cf_9045_42a6_b0f5_27913d4839b4.slice/crio-142e59138680c1e45a86dea1e27cbfb7264b982835184dd4ac900b1b3987eef6 WatchSource:0}: Error finding container 142e59138680c1e45a86dea1e27cbfb7264b982835184dd4ac900b1b3987eef6: Status 404 returned error can't find the container with id 142e59138680c1e45a86dea1e27cbfb7264b982835184dd4ac900b1b3987eef6
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.804661 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-d7cvw"]
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.804930 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2"
Nov 24 21:22:05 crc kubenswrapper[4915]: E1124 21:22:05.805485 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:06.305472116 +0000 UTC m=+144.621724289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.818308 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-x7cqd" podStartSLOduration=123.818283848 podStartE2EDuration="2m3.818283848s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:05.813846055 +0000 UTC m=+144.130098238" watchObservedRunningTime="2025-11-24 21:22:05.818283848 +0000 UTC m=+144.134536021"
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.829013 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-7pvlk"]
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.840052 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-brhzz"]
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.854119 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-sb4zq" podStartSLOduration=123.854101627 podStartE2EDuration="2m3.854101627s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:05.851897367 +0000 UTC m=+144.168149540" watchObservedRunningTime="2025-11-24 21:22:05.854101627 +0000 UTC m=+144.170353820"
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.874946 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bzdrs"]
Nov 24 21:22:05 crc kubenswrapper[4915]: W1124 21:22:05.906183 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb25c8ff_bce3_4e50_ad7e_6cea6815a02b.slice/crio-f45787474f68ccd8594ff2ea5193e0c5f6421788180d89a667b53a41420f9a52 WatchSource:0}: Error finding container f45787474f68ccd8594ff2ea5193e0c5f6421788180d89a667b53a41420f9a52: Status 404 returned error can't find the container with id f45787474f68ccd8594ff2ea5193e0c5f6421788180d89a667b53a41420f9a52
Nov 24 21:22:05 crc kubenswrapper[4915]: W1124 21:22:05.906672 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dd6fba7_d689_47af_bf34_c2f20aa2893b.slice/crio-f6c2cbf171c4c76179e325096f9d23e025fa6974dbaf77673bfca134c1847818 WatchSource:0}: Error finding container f6c2cbf171c4c76179e325096f9d23e025fa6974dbaf77673bfca134c1847818: Status 404 returned error can't find the container with id f6c2cbf171c4c76179e325096f9d23e025fa6974dbaf77673bfca134c1847818
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.926530 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-snd2t" podStartSLOduration=123.926512246 podStartE2EDuration="2m3.926512246s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:05.882988363 +0000 UTC m=+144.199240536" watchObservedRunningTime="2025-11-24 21:22:05.926512246 +0000 UTC m=+144.242764419"
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.927110 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" podStartSLOduration=123.927103967 podStartE2EDuration="2m3.927103967s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:05.906883103 +0000 UTC m=+144.223135276" watchObservedRunningTime="2025-11-24 21:22:05.927103967 +0000 UTC m=+144.243356140"
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.934177 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:22:05 crc kubenswrapper[4915]: E1124 21:22:05.934662 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:06.434632655 +0000 UTC m=+144.750884828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.934812 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2"
Nov 24 21:22:05 crc kubenswrapper[4915]: E1124 21:22:05.935325 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:06.43531313 +0000 UTC m=+144.751565303 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.946768 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" podStartSLOduration=123.946747521 podStartE2EDuration="2m3.946747521s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:05.938771748 +0000 UTC m=+144.255023921" watchObservedRunningTime="2025-11-24 21:22:05.946747521 +0000 UTC m=+144.262999694"
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.947371 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zbtbl"]
Nov 24 21:22:05 crc kubenswrapper[4915]: I1124 21:22:05.958501 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz"]
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.021789 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-d8b76" podStartSLOduration=124.021771046 podStartE2EDuration="2m4.021771046s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:06.009233184 +0000 UTC m=+144.325485357" watchObservedRunningTime="2025-11-24 21:22:06.021771046 +0000 UTC m=+144.338023219"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.025154 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-llzdr"]
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.037197 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:22:06 crc kubenswrapper[4915]: E1124 21:22:06.037491 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:06.537477635 +0000 UTC m=+144.853729798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.044523 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx"]
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.066560 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rpff6"]
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.066612 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd"]
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.074102 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt"]
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.085598 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" podStartSLOduration=124.085584168 podStartE2EDuration="2m4.085584168s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:06.084114963 +0000 UTC m=+144.400367136" watchObservedRunningTime="2025-11-24 21:22:06.085584168 +0000 UTC m=+144.401836341"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.139532 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2"
Nov 24 21:22:06 crc kubenswrapper[4915]: E1124 21:22:06.140051 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:06.640034994 +0000 UTC m=+144.956287177 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.141370 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-hlzxv" podStartSLOduration=124.141356673 podStartE2EDuration="2m4.141356673s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:06.129124052 +0000 UTC m=+144.445376225" watchObservedRunningTime="2025-11-24 21:22:06.141356673 +0000 UTC m=+144.457608866"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.174721 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-rflqm"]
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.176608 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs"]
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.184528 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2"]
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.241514 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:22:06 crc kubenswrapper[4915]: E1124 21:22:06.241806 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:06.741791934 +0000 UTC m=+145.058044107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.261907 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685"]
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.343566 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2"
Nov 24 21:22:06 crc kubenswrapper[4915]: E1124 21:22:06.343916 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:06.843905506 +0000 UTC m=+145.160157679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.444967 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:22:06 crc kubenswrapper[4915]: E1124 21:22:06.445661 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:06.945640076 +0000 UTC m=+145.261892249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.482107 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-d8b76"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.482410 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-d8b76"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.489493 4915 patch_prober.go:28] interesting pod/apiserver-76f77b778f-d8b76 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.489549 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-d8b76" podUID="05cd76ab-027a-4c05-aeb0-702df5277cbe" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.501642 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-brhzz" event={"ID":"1dd6fba7-d689-47af-bf34-c2f20aa2893b","Type":"ContainerStarted","Data":"78139c57d91dbea545c03a729ee776343220e3002e35d39e6402b503958bca3c"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.501682 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-brhzz" event={"ID":"1dd6fba7-d689-47af-bf34-c2f20aa2893b","Type":"ContainerStarted","Data":"f6c2cbf171c4c76179e325096f9d23e025fa6974dbaf77673bfca134c1847818"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.513481 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" event={"ID":"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b","Type":"ContainerStarted","Data":"f45787474f68ccd8594ff2ea5193e0c5f6421788180d89a667b53a41420f9a52"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.515642 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dwnvl" event={"ID":"5f9100cf-9045-42a6-b0f5-27913d4839b4","Type":"ContainerStarted","Data":"bb2cda71c565dcbffec51b565e7e805aa822e5cc7c809fdcac2deb2c9144db56"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.515686 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-dwnvl" event={"ID":"5f9100cf-9045-42a6-b0f5-27913d4839b4","Type":"ContainerStarted","Data":"142e59138680c1e45a86dea1e27cbfb7264b982835184dd4ac900b1b3987eef6"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.519221 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" event={"ID":"0828ca36-88fa-4eac-95ed-a8bc14a2b840","Type":"ContainerStarted","Data":"dbd44a01cc184b7aa682fe6f288ab411cfc1b40fb7835269ef26263fa57dd3d8"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.519258 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" event={"ID":"0828ca36-88fa-4eac-95ed-a8bc14a2b840","Type":"ContainerStarted","Data":"235b0401578c794ce342f238e924c6deb3a947d96b678ad5fca67e3981d5d5fb"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.538259 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" event={"ID":"96ed7ec2-f67e-4a3a-9023-34ac783d91db","Type":"ContainerStarted","Data":"d4c5b80948f483dc51f800a0f0b88ae445ce9d592bc0269f76ebad15c187e92e"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.547805 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2"
Nov 24 21:22:06 crc kubenswrapper[4915]: E1124 21:22:06.548112 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:07.048100421 +0000 UTC m=+145.364352584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.559140 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bzdrs" event={"ID":"c5109e97-fa8d-4eb5-afff-4c4ea89f1948","Type":"ContainerStarted","Data":"f63f5158854cbcd86786068a936f9a166c5755652092cc361697043172e65e2f"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.563595 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lbpp5" podStartSLOduration=124.563579532 podStartE2EDuration="2m4.563579532s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:06.56246392 +0000 UTC m=+144.878716093" watchObservedRunningTime="2025-11-24 21:22:06.563579532 +0000 UTC m=+144.879831705"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.564010 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-brhzz" podStartSLOduration=124.564005407 podStartE2EDuration="2m4.564005407s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:06.531104545 +0000 UTC m=+144.847356708" watchObservedRunningTime="2025-11-24 21:22:06.564005407 +0000 UTC m=+144.880257600"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.571051 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rflqm" event={"ID":"7eb280c8-f12e-4971-bf46-3aaaff234a87","Type":"ContainerStarted","Data":"e5c65e2aec78d7370c44f855243c0de24e92a396dadd2045567f4570dc3c2860"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.586964 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-llzdr" event={"ID":"f3e83404-96f6-4a56-9642-316296b61562","Type":"ContainerStarted","Data":"b8bf69c6d1a19d7b2859db11b8308e9b5e7242712a91149f32143fad0323b94a"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.592663 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-dwnvl" podStartSLOduration=124.592631982 podStartE2EDuration="2m4.592631982s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:06.592409584 +0000 UTC m=+144.908661757" watchObservedRunningTime="2025-11-24 21:22:06.592631982 +0000 UTC m=+144.908884155"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.600948 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs" event={"ID":"65aeafa6-f0b4-4983-827d-9a70340306ae","Type":"ContainerStarted","Data":"758508d8bd339907d920313c76fe24b6ebc3fde0265d993571a8929a35ef1f21"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.602912 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" event={"ID":"15b4822e-5f78-4c15-a2ad-641d0af56a1b","Type":"ContainerStarted","Data":"f6a4ebeeca838c51b58040d439ba8686a24d743c53ef682f7c65effa686ab379"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.605218 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rpff6" event={"ID":"fe4383de-d0d0-423d-aca2-f3dc1da5acba","Type":"ContainerStarted","Data":"4d0d349e47512849e201871bea20686737d0d41c3eaa4c15aed76c2d30187cb4"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.610553 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" event={"ID":"5212e9eb-ead2-4974-b668-ffe2d6a586d4","Type":"ContainerStarted","Data":"60f0d0083a1ee901f0982f16017626d63876ec77510405e66fb903bd2ac1a86e"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.654577 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:22:06 crc kubenswrapper[4915]: E1124 21:22:06.656898 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:07.156878369 +0000 UTC m=+145.473130542 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.658788 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" event={"ID":"56ba8653-31c9-4e8a-bc67-9bfd3514677b","Type":"ContainerStarted","Data":"eecac83e2fd714cf2e2aeffa0f5e4e90639d0141edc06a3c4392a2b0c3cb5612"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.694425 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" event={"ID":"d8e29389-d3c0-4175-81dc-ecec5a0c5f35","Type":"ContainerStarted","Data":"3d77f449ad9c0e7fa93c45ce8ce5a7b1495ff9645d0fd2f6a7dd37d9c0557feb"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.705009 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-zgpwn" podStartSLOduration=124.704987382 podStartE2EDuration="2m4.704987382s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:06.65906173 +0000 UTC m=+144.975313903" watchObservedRunningTime="2025-11-24 21:22:06.704987382 +0000 UTC m=+145.021239555"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.705329 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" event={"ID":"859b0c41-321d-4408-9971-f024de0f81aa","Type":"ContainerStarted","Data":"4b57a0560ee134d198059becbebb39a0eea02d7d2efdc1407a33cde55ab2e588"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.705389 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" event={"ID":"859b0c41-321d-4408-9971-f024de0f81aa","Type":"ContainerStarted","Data":"ae412bdcd00cd24f79968a17b1378e9ca5670b1224bfa2d76b037f6860fe5697"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.752180 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" podStartSLOduration=124.75215458 podStartE2EDuration="2m4.75215458s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:06.704862177 +0000 UTC m=+145.021114360" watchObservedRunningTime="2025-11-24 21:22:06.75215458 +0000 UTC m=+145.068406763"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.752698 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-zjqjg" podStartSLOduration=124.75269207 podStartE2EDuration="2m4.75269207s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:06.749535073 +0000 UTC m=+145.065787256" watchObservedRunningTime="2025-11-24 21:22:06.75269207 +0000 UTC m=+145.068944253"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.756585 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2"
Nov 24 21:22:06 crc kubenswrapper[4915]: E1124 21:22:06.757793 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:07.257781077 +0000 UTC m=+145.574033250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.784296 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kjj6c" event={"ID":"c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5","Type":"ContainerStarted","Data":"b32176d677c89b00f47b959ad105a3e02d67770460896838097cf86c58d3f720"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.804049 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-d7cvw" event={"ID":"ea64de69-8cc1-4935-8dc5-908bf44bb2d4","Type":"ContainerStarted","Data":"4933f73c5e86a542051de3cd0a7b81c042d71fb6f61275d5b4569deced64c331"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.804910 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-d7cvw"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.807940 4915 patch_prober.go:28] interesting pod/downloads-7954f5f757-d7cvw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.807979 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-d7cvw" podUID="ea64de69-8cc1-4935-8dc5-908bf44bb2d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.833533 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nmx6q" event={"ID":"fa3f4f80-34b9-4283-874c-ace65ae5ed4d","Type":"ContainerStarted","Data":"c588fe579f726f5c38cadcbee041c3659d579a9c2c02025040d067972ef67eb0"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.835659 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rr2s" event={"ID":"53094d5f-2a39-479b-912d-c766fd9f4fa1","Type":"ContainerStarted","Data":"83bc71706b7f7185aebfaf0d0507b3ad5520144be0e0cd6e2534b38d3a861319"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.835687 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rr2s" event={"ID":"53094d5f-2a39-479b-912d-c766fd9f4fa1","Type":"ContainerStarted","Data":"7e562c10d7a6a74726faaf96810b6bbd174f337f34c0b2f84d2e89f923792cb8"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.843796 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd" event={"ID":"78aca27e-b8fd-4b40-a1d0-389faf2593c6","Type":"ContainerStarted","Data":"197465154ade3ec26017b52a837e37497f1bf448cd498d7bd5ab6aa977a009aa"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.846502 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct" event={"ID":"7f11f7bc-4f9d-4bed-8557-de43a62647b1","Type":"ContainerStarted","Data":"c9160fd157490d825d195abbc395ef2542a82f21250e867ff1118b224a8089c3"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.847163 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.850977 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-kjj6c" podStartSLOduration=124.850962241 podStartE2EDuration="2m4.850962241s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:06.850311767 +0000 UTC m=+145.166563940" watchObservedRunningTime="2025-11-24 21:22:06.850962241 +0000 UTC m=+145.167214414"
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.856238 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" event={"ID":"05335a1e-219e-4242-8c4a-4253ac1f346d","Type":"ContainerStarted","Data":"5ebdd8b3d5536050d795a3f6a5e1ca5ff7b61d26fc1abd851a2fde033a90cfc1"}
Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.857261 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 24 21:22:06 crc kubenswrapper[4915]: E1124 21:22:06.858611 4915 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:07.358592162 +0000 UTC m=+145.674844335 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.864012 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" event={"ID":"e70a867a-f31a-495c-9f2a-13a996188c84","Type":"ContainerStarted","Data":"449d8b3858a5b3d79a8feede3918eb2303aaf058dc5d23c946eefd44f008b75e"} Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.864094 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.870805 4915 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-j2dm5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.870898 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" podUID="e70a867a-f31a-495c-9f2a-13a996188c84" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 
10.217.0.29:8443: connect: connection refused" Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.890576 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685" event={"ID":"54fcdac8-7a83-4cc3-8115-3600324106c4","Type":"ContainerStarted","Data":"40f47a2c8b1160adb1293238a7b2cc083fe070da74285109b5fe4b376e683dd9"} Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.904995 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-d7cvw" podStartSLOduration=124.904952981 podStartE2EDuration="2m4.904952981s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:06.903366882 +0000 UTC m=+145.219619055" watchObservedRunningTime="2025-11-24 21:22:06.904952981 +0000 UTC m=+145.221205154" Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.944201 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" event={"ID":"e0cfbbde-c3a3-4306-a714-76134e43b495","Type":"ContainerStarted","Data":"a70b7116eb6b3d2d5388bd487b52c42e5f09dcd44a2171c56cd76d726bc179c0"} Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.944731 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4rr2s" podStartSLOduration=124.944710066 podStartE2EDuration="2m4.944710066s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:06.942206993 +0000 UTC m=+145.258459166" watchObservedRunningTime="2025-11-24 21:22:06.944710066 +0000 UTC m=+145.260962239" Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.945062 4915 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.946965 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.947955 4915 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-rdk5z container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" start-of-body= Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.948004 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" podUID="e0cfbbde-c3a3-4306-a714-76134e43b495" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": dial tcp 10.217.0.22:6443: connect: connection refused" Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.959587 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:06 crc kubenswrapper[4915]: E1124 21:22:06.961786 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:07.461771534 +0000 UTC m=+145.778023797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.971409 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lrzbs" event={"ID":"aee2119f-dcf9-46fa-9341-13a9b5faec77","Type":"ContainerStarted","Data":"c0986256f430ddb6616f5d84cfeb62ccfd8e4a8f117cc584bc2e2e5544482025"} Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.973442 4915 patch_prober.go:28] interesting pod/router-default-5444994796-kjj6c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:22:06 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Nov 24 21:22:06 crc kubenswrapper[4915]: [+]process-running ok Nov 24 21:22:06 crc kubenswrapper[4915]: healthz check failed Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.973504 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kjj6c" podUID="c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.986877 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" event={"ID":"38f0c117-4d15-4ac3-aece-8f0189d91bdb","Type":"ContainerStarted","Data":"40339628c858c0f7593565cd65339b6a5a2bf62fb2f98b21abc4b89e3e62d790"} Nov 24 21:22:06 crc kubenswrapper[4915]: 
I1124 21:22:06.987665 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.999362 4915 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zbtbl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Nov 24 21:22:06 crc kubenswrapper[4915]: I1124 21:22:06.999425 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" podUID="38f0c117-4d15-4ac3-aece-8f0189d91bdb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.002075 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2" event={"ID":"29df6b7f-c6e5-4a9e-9257-57ccea81b430","Type":"ContainerStarted","Data":"de4576604e5f17f42e81feda2d9d1d866efdf3258d91f85f463bc971a8237fcb"} Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.011097 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hmd9t" event={"ID":"3d71d3fc-a6c9-4e59-992f-03d27f8d14b9","Type":"ContainerStarted","Data":"b70c83695eb4719f9028b5b93ea55bd5036c86a197b81fefbd81b0a897364a20"} Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.038811 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct" podStartSLOduration=125.038784592 podStartE2EDuration="2m5.038784592s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:07.000325455 +0000 UTC m=+145.316577638" watchObservedRunningTime="2025-11-24 21:22:07.038784592 +0000 UTC m=+145.355036765" Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.063416 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.064200 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:07 crc kubenswrapper[4915]: E1124 21:22:07.065649 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:07.565626621 +0000 UTC m=+145.881878864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.102837 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.103081 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.117636 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" podStartSLOduration=125.117612636 podStartE2EDuration="2m5.117612636s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:07.10630951 +0000 UTC m=+145.422561673" watchObservedRunningTime="2025-11-24 21:22:07.117612636 +0000 UTC m=+145.433864809" Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.118035 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" podStartSLOduration=125.118027222 podStartE2EDuration="2m5.118027222s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:07.050848107 +0000 UTC m=+145.367100270" watchObservedRunningTime="2025-11-24 
21:22:07.118027222 +0000 UTC m=+145.434279395" Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.119335 4915 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-zsbmp container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.24:8443/livez\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.119381 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" podUID="15b4822e-5f78-4c15-a2ad-641d0af56a1b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.24:8443/livez\": dial tcp 10.217.0.24:8443: connect: connection refused" Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.142086 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-lrzbs" podStartSLOduration=7.142063967 podStartE2EDuration="7.142063967s" podCreationTimestamp="2025-11-24 21:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:07.13969767 +0000 UTC m=+145.455949843" watchObservedRunningTime="2025-11-24 21:22:07.142063967 +0000 UTC m=+145.458316140" Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.166185 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:07 crc kubenswrapper[4915]: E1124 21:22:07.184205 4915 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:07.684155808 +0000 UTC m=+146.000407991 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.265351 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" podStartSLOduration=125.26532821 podStartE2EDuration="2m5.26532821s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:07.212487102 +0000 UTC m=+145.528739275" watchObservedRunningTime="2025-11-24 21:22:07.26532821 +0000 UTC m=+145.581580383" Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.267294 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:07 crc kubenswrapper[4915]: E1124 21:22:07.267696 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:22:07.767675086 +0000 UTC m=+146.083927259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.372517 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:07 crc kubenswrapper[4915]: E1124 21:22:07.372880 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:07.872867243 +0000 UTC m=+146.189119416 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.474392 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:07 crc kubenswrapper[4915]: E1124 21:22:07.474612 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:07.974594321 +0000 UTC m=+146.290846494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.580722 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:07 crc kubenswrapper[4915]: E1124 21:22:07.581735 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:08.081714238 +0000 UTC m=+146.397966411 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.658013 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-sb4zq" Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.682012 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:07 crc kubenswrapper[4915]: E1124 21:22:07.682619 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:08.182599035 +0000 UTC m=+146.498851208 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.784226 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:07 crc kubenswrapper[4915]: E1124 21:22:07.784988 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:08.284975258 +0000 UTC m=+146.601227431 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.855169 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5xxjt" Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.885890 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:07 crc kubenswrapper[4915]: E1124 21:22:07.886321 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:08.386305592 +0000 UTC m=+146.702557765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.946933 4915 patch_prober.go:28] interesting pod/router-default-5444994796-kjj6c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:22:07 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Nov 24 21:22:07 crc kubenswrapper[4915]: [+]process-running ok Nov 24 21:22:07 crc kubenswrapper[4915]: healthz check failed Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.946986 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kjj6c" podUID="c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:22:07 crc kubenswrapper[4915]: E1124 21:22:07.988967 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:08.488947434 +0000 UTC m=+146.805199607 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:07 crc kubenswrapper[4915]: I1124 21:22:07.988460 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.014630 4915 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-c98n4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.014701 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" podUID="fc3e3b86-8f8c-4d9e-ad67-9f562e077367" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.27:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.019366 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" 
event={"ID":"5212e9eb-ead2-4974-b668-ffe2d6a586d4","Type":"ContainerStarted","Data":"0ac61026ddd903b83f0d298111b5220f3b3165868ac985c7d2874f336a282ea9"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.019671 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" event={"ID":"5212e9eb-ead2-4974-b668-ffe2d6a586d4","Type":"ContainerStarted","Data":"d20becd0899b2e41a120f83f15bf204a7ecd67989fd8a49651077f8f609c4052"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.022899 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" event={"ID":"56ba8653-31c9-4e8a-bc67-9bfd3514677b","Type":"ContainerStarted","Data":"0da196c2cc6f71b081769f1a4af6ab409bd522f1056b7b1f14f5884ccb532ebf"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.025419 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rflqm" event={"ID":"7eb280c8-f12e-4971-bf46-3aaaff234a87","Type":"ContainerStarted","Data":"a61312e113c56d7852322b513ed7fe50cb13d28f34dee1f1aca5e91dda0be7ab"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.025461 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-rflqm" event={"ID":"7eb280c8-f12e-4971-bf46-3aaaff234a87","Type":"ContainerStarted","Data":"9b52b62d61917908f38a6e7384eee5e0fa0e81d05a6a1e5f488163490b8e35e5"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.027045 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rpff6" event={"ID":"fe4383de-d0d0-423d-aca2-f3dc1da5acba","Type":"ContainerStarted","Data":"5ac0b11f97aafa6dd1b1fc94bb688c390571d61a05186f748f6019f186723369"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.029420 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs" event={"ID":"65aeafa6-f0b4-4983-827d-9a70340306ae","Type":"ContainerStarted","Data":"71d42a758a4ef6a388d70bc11e1aeef7b3f2ec293c779d3d8cc3d2c706d03d0a"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.031525 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" event={"ID":"05335a1e-219e-4242-8c4a-4253ac1f346d","Type":"ContainerStarted","Data":"7c2c4fcef3a9ee3b6b648dcaa2ef86d7bac2b63ab32a840326bc49d4575ab51d"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.033301 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.034275 4915 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-s4nzx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.034341 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" podUID="05335a1e-219e-4242-8c4a-4253ac1f346d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.035548 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" event={"ID":"38f0c117-4d15-4ac3-aece-8f0189d91bdb","Type":"ContainerStarted","Data":"fcc6ba456b5973228b4b147ab3d4bbbbc4698a26856b640381f15b9f47caa322"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.036684 4915 patch_prober.go:28] 
interesting pod/marketplace-operator-79b997595-zbtbl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.036717 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" podUID="38f0c117-4d15-4ac3-aece-8f0189d91bdb" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.038428 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" event={"ID":"d8e29389-d3c0-4175-81dc-ecec5a0c5f35","Type":"ContainerStarted","Data":"d2c3b9371de3176ecfcf1bd0014531bf5846d041c44d265d3cdeea6e238ad4c9"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.040489 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685" event={"ID":"54fcdac8-7a83-4cc3-8115-3600324106c4","Type":"ContainerStarted","Data":"7a7db943329480fbdc763594e784bea6f24b62e9cc59c4849ac24209898e16c7"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.042750 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-d7cvw" event={"ID":"ea64de69-8cc1-4935-8dc5-908bf44bb2d4","Type":"ContainerStarted","Data":"6202637181ed9e8ab2b75af6e428fbb1fc41ab502c21ac5b1a5ba1a518b7f7bf"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.044261 4915 patch_prober.go:28] interesting pod/downloads-7954f5f757-d7cvw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection 
refused" start-of-body= Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.044310 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-d7cvw" podUID="ea64de69-8cc1-4935-8dc5-908bf44bb2d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.046832 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2" event={"ID":"29df6b7f-c6e5-4a9e-9257-57ccea81b430","Type":"ContainerStarted","Data":"74f15e91f721734bd55a3977fa3432db055034143d9f32c2244aabf9308101e2"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.053493 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bzdrs" event={"ID":"c5109e97-fa8d-4eb5-afff-4c4ea89f1948","Type":"ContainerStarted","Data":"2baaa9fd9604edb280cdd4daa49c67c6d00166937e0dca3dbd41626011b5124a"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.057043 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hmd9t" event={"ID":"3d71d3fc-a6c9-4e59-992f-03d27f8d14b9","Type":"ContainerStarted","Data":"e490f5847c2162c275d2f7a0e26dfb46cb02c8a968a0c95173a3da494a726672"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.057089 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-hmd9t" event={"ID":"3d71d3fc-a6c9-4e59-992f-03d27f8d14b9","Type":"ContainerStarted","Data":"b20727bf4843c23a003669e50385a3845ebd590aec37a1e7ed98b230bb7c7c7d"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.057209 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-hmd9t" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.063890 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd" event={"ID":"78aca27e-b8fd-4b40-a1d0-389faf2593c6","Type":"ContainerStarted","Data":"396684bfc1a7a7a5dde07dbbc3518b7cec2080fa6fecc3d0d62990f7f8553f59"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.067528 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nmx6q" event={"ID":"fa3f4f80-34b9-4283-874c-ace65ae5ed4d","Type":"ContainerStarted","Data":"873a3bd6a7f7d87e4f5c990804dac2473363dee84308d5790071fc1b0095203c"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.070532 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" event={"ID":"bb25c8ff-bce3-4e50-ad7e-6cea6815a02b","Type":"ContainerStarted","Data":"d43733512bc213a8bb9948db60d6be0edd2a1f0a13d06f843ab65a75f4307398"} Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.090318 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.101622 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j2dm5" Nov 24 21:22:08 crc kubenswrapper[4915]: E1124 21:22:08.102463 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:08.602431036 +0000 UTC m=+146.918683209 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.179587 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pk8zt" podStartSLOduration=126.179571488 podStartE2EDuration="2m6.179571488s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:08.120385047 +0000 UTC m=+146.436637220" watchObservedRunningTime="2025-11-24 21:22:08.179571488 +0000 UTC m=+146.495823661" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.179721 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rpff6" podStartSLOduration=126.179716784 podStartE2EDuration="2m6.179716784s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:08.162713637 +0000 UTC m=+146.478965820" watchObservedRunningTime="2025-11-24 21:22:08.179716784 +0000 UTC m=+146.495968957" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.197091 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:08 crc kubenswrapper[4915]: E1124 21:22:08.208192 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:08.708177833 +0000 UTC m=+147.024430006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.300579 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:08 crc kubenswrapper[4915]: E1124 21:22:08.301511 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:08.80146993 +0000 UTC m=+147.117722103 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.340183 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-q87rd" podStartSLOduration=126.340162176 podStartE2EDuration="2m6.340162176s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:08.265207974 +0000 UTC m=+146.581460147" watchObservedRunningTime="2025-11-24 21:22:08.340162176 +0000 UTC m=+146.656414349" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.402717 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:08 crc kubenswrapper[4915]: E1124 21:22:08.403035 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:08.903021342 +0000 UTC m=+147.219273515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.455344 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lz5zs" podStartSLOduration=126.455325699 podStartE2EDuration="2m6.455325699s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:08.378866642 +0000 UTC m=+146.695118825" watchObservedRunningTime="2025-11-24 21:22:08.455325699 +0000 UTC m=+146.771577872" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.486946 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.504018 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:08 crc kubenswrapper[4915]: E1124 21:22:08.504582 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:22:09.004560834 +0000 UTC m=+147.320813007 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.604470 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-rflqm" podStartSLOduration=126.604450884 podStartE2EDuration="2m6.604450884s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:08.600835581 +0000 UTC m=+146.917087764" watchObservedRunningTime="2025-11-24 21:22:08.604450884 +0000 UTC m=+146.920703057" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.605588 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5qvrk" podStartSLOduration=126.605580667 podStartE2EDuration="2m6.605580667s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:08.527180338 +0000 UTC m=+146.843432521" watchObservedRunningTime="2025-11-24 21:22:08.605580667 +0000 UTC m=+146.921832840" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.606409 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:08 crc kubenswrapper[4915]: E1124 21:22:08.606984 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:09.106966607 +0000 UTC m=+147.423218770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.707723 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:08 crc kubenswrapper[4915]: E1124 21:22:08.708123 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:09.208104884 +0000 UTC m=+147.524357057 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.749655 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-j2vd2" podStartSLOduration=126.749639004 podStartE2EDuration="2m6.749639004s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:08.749081874 +0000 UTC m=+147.065334037" watchObservedRunningTime="2025-11-24 21:22:08.749639004 +0000 UTC m=+147.065891177" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.750075 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-bzdrs" podStartSLOduration=8.75007069 podStartE2EDuration="8.75007069s" podCreationTimestamp="2025-11-24 21:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:08.658724935 +0000 UTC m=+146.974977108" watchObservedRunningTime="2025-11-24 21:22:08.75007069 +0000 UTC m=+147.066322863" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.811175 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: 
\"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:08 crc kubenswrapper[4915]: E1124 21:22:08.811729 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:09.311708742 +0000 UTC m=+147.627960915 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.826476 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-nz685" podStartSLOduration=126.826457625 podStartE2EDuration="2m6.826457625s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:08.819181207 +0000 UTC m=+147.135433380" watchObservedRunningTime="2025-11-24 21:22:08.826457625 +0000 UTC m=+147.142709798" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.879053 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-nmx6q" podStartSLOduration=126.879015222 podStartE2EDuration="2m6.879015222s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 21:22:08.871008667 +0000 UTC m=+147.187260850" watchObservedRunningTime="2025-11-24 21:22:08.879015222 +0000 UTC m=+147.195267395" Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.914026 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:08 crc kubenswrapper[4915]: E1124 21:22:08.914421 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:09.414405486 +0000 UTC m=+147.730657659 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.958829 4915 patch_prober.go:28] interesting pod/router-default-5444994796-kjj6c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:22:08 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Nov 24 21:22:08 crc kubenswrapper[4915]: [+]process-running ok Nov 24 21:22:08 crc kubenswrapper[4915]: healthz check failed Nov 24 21:22:08 crc kubenswrapper[4915]: I1124 21:22:08.958884 4915 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kjj6c" podUID="c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.016487 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.016823 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:09.516795999 +0000 UTC m=+147.833048172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.018604 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" podStartSLOduration=127.018595675 podStartE2EDuration="2m7.018595675s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:08.95387227 +0000 UTC m=+147.270124443" watchObservedRunningTime="2025-11-24 21:22:09.018595675 +0000 UTC m=+147.334847848" Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.089870 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-llzdr" event={"ID":"f3e83404-96f6-4a56-9642-316296b61562","Type":"ContainerStarted","Data":"8c79746e759d80ca52d5c929c473829c54cc151d8d8bfcae4edfbbd1f84f1f9c"} Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.091655 4915 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-zbtbl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.092080 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" podUID="38f0c117-4d15-4ac3-aece-8f0189d91bdb" 
containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.093140 4915 patch_prober.go:28] interesting pod/downloads-7954f5f757-d7cvw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.093173 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-d7cvw" podUID="ea64de69-8cc1-4935-8dc5-908bf44bb2d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.118570 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.119327 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:09.619299086 +0000 UTC m=+147.935551259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.128707 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" podStartSLOduration=127.128689202 podStartE2EDuration="2m7.128689202s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:09.021305585 +0000 UTC m=+147.337557758" watchObservedRunningTime="2025-11-24 21:22:09.128689202 +0000 UTC m=+147.444941375" Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.129691 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-s4nzx" Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.180211 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-7pvlk" podStartSLOduration=127.1801913 podStartE2EDuration="2m7.1801913s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:09.175757836 +0000 UTC m=+147.492010009" watchObservedRunningTime="2025-11-24 21:22:09.1801913 +0000 UTC m=+147.496443473" Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.181795 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns/dns-default-hmd9t" podStartSLOduration=9.181788388 podStartE2EDuration="9.181788388s" podCreationTimestamp="2025-11-24 21:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:09.129411879 +0000 UTC m=+147.445664052" watchObservedRunningTime="2025-11-24 21:22:09.181788388 +0000 UTC m=+147.498040561" Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.222068 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.224886 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:09.724865166 +0000 UTC m=+148.041117369 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.324149 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.324312 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:09.824285059 +0000 UTC m=+148.140537232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.324446 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.324744 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:09.824731286 +0000 UTC m=+148.140983459 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.426018 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.426188 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:09.926159463 +0000 UTC m=+148.242411636 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.426660 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.427048 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:09.927040126 +0000 UTC m=+148.243292299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.528204 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.528369 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.028346049 +0000 UTC m=+148.344598222 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.528443 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.528813 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.028802356 +0000 UTC m=+148.345054529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.631760 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.631911 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.131888234 +0000 UTC m=+148.448140407 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.632061 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.632376 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.132364122 +0000 UTC m=+148.448616295 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.732901 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.733089 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.233064862 +0000 UTC m=+148.549317035 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.733476 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.733766 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.233753508 +0000 UTC m=+148.550005681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.836356 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.836697 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.336682371 +0000 UTC m=+148.652934544 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.937748 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:09 crc kubenswrapper[4915]: E1124 21:22:09.938146 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.438127689 +0000 UTC m=+148.754379862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.942497 4915 patch_prober.go:28] interesting pod/router-default-5444994796-kjj6c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:22:09 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Nov 24 21:22:09 crc kubenswrapper[4915]: [+]process-running ok Nov 24 21:22:09 crc kubenswrapper[4915]: healthz check failed Nov 24 21:22:09 crc kubenswrapper[4915]: I1124 21:22:09.942572 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kjj6c" podUID="c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.038583 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:10 crc kubenswrapper[4915]: E1124 21:22:10.038734 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:22:10.538710115 +0000 UTC m=+148.854962288 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.038939 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:10 crc kubenswrapper[4915]: E1124 21:22:10.039272 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.539259735 +0000 UTC m=+148.855511908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.100597 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-llzdr" event={"ID":"f3e83404-96f6-4a56-9642-316296b61562","Type":"ContainerStarted","Data":"69a887f42c4ef7ce01cf796cdb982c4527a6ef30494c0bce3320f5c5a87379e6"} Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.100645 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-llzdr" event={"ID":"f3e83404-96f6-4a56-9642-316296b61562","Type":"ContainerStarted","Data":"715f7ad7382049c307fb2d3ff912a765c9bd3095fe78c47ac1945742f0f14fa1"} Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.140580 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:10 crc kubenswrapper[4915]: E1124 21:22:10.141151 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.641128079 +0000 UTC m=+148.957380242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.242935 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:10 crc kubenswrapper[4915]: E1124 21:22:10.247281 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.74725198 +0000 UTC m=+149.063504443 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.344207 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:10 crc kubenswrapper[4915]: E1124 21:22:10.344457 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.84441069 +0000 UTC m=+149.160662863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.344811 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.344844 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.345754 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.345826 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.345867 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:10 crc kubenswrapper[4915]: E1124 21:22:10.346464 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.846447095 +0000 UTC m=+149.162699268 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.349207 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.373050 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.382410 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.382628 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.413893 4915 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.447050 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:10 crc kubenswrapper[4915]: E1124 21:22:10.447233 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.947208528 +0000 UTC m=+149.263460701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.447282 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:10 crc kubenswrapper[4915]: E1124 21:22:10.447621 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:10.947608432 +0000 UTC m=+149.263860605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.457508 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.458098 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-prtlc"] Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.458993 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.464906 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.468649 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prtlc"] Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.469657 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.548579 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.548741 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g8m8\" (UniqueName: \"kubernetes.io/projected/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-kube-api-access-2g8m8\") pod \"certified-operators-prtlc\" (UID: \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\") " pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.548785 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-catalog-content\") pod \"certified-operators-prtlc\" (UID: \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\") " pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.548863 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-utilities\") pod \"certified-operators-prtlc\" (UID: \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\") " pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:22:10 crc kubenswrapper[4915]: E1124 21:22:10.548957 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-24 21:22:11.048942076 +0000 UTC m=+149.365194249 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.549405 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.645994 4915 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-24T21:22:10.413920001Z","Handler":null,"Name":""} Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.649980 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-utilities\") pod \"certified-operators-prtlc\" (UID: \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\") " pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.650053 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g8m8\" (UniqueName: \"kubernetes.io/projected/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-kube-api-access-2g8m8\") pod \"certified-operators-prtlc\" (UID: \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\") " pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.650082 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.650104 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-catalog-content\") pod \"certified-operators-prtlc\" (UID: \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\") " 
pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.651960 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-catalog-content\") pod \"certified-operators-prtlc\" (UID: \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\") " pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.652724 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-utilities\") pod \"certified-operators-prtlc\" (UID: \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\") " pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:22:10 crc kubenswrapper[4915]: E1124 21:22:10.652777 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-24 21:22:11.152748451 +0000 UTC m=+149.469000624 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-97jl2" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.656669 4915 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.656737 4915 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.661386 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rwqx9"] Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.662994 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.669273 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.683319 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g8m8\" (UniqueName: \"kubernetes.io/projected/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-kube-api-access-2g8m8\") pod \"certified-operators-prtlc\" (UID: \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\") " pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.683672 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rwqx9"] Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.754433 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.754579 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-utilities\") pod \"community-operators-rwqx9\" (UID: \"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\") " pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.754639 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skdmm\" (UniqueName: \"kubernetes.io/projected/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-kube-api-access-skdmm\") pod \"community-operators-rwqx9\" (UID: 
\"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\") " pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.754671 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-catalog-content\") pod \"community-operators-rwqx9\" (UID: \"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\") " pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.772149 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.811090 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.857483 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skdmm\" (UniqueName: \"kubernetes.io/projected/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-kube-api-access-skdmm\") pod \"community-operators-rwqx9\" (UID: \"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\") " pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.857826 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.857862 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-catalog-content\") pod \"community-operators-rwqx9\" (UID: \"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\") " pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.857923 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-utilities\") pod \"community-operators-rwqx9\" (UID: \"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\") " pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.858539 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-utilities\") pod \"community-operators-rwqx9\" 
(UID: \"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\") " pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.861308 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-catalog-content\") pod \"community-operators-rwqx9\" (UID: \"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\") " pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.870457 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5gsdw"] Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.871603 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.882905 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.882954 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.899251 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skdmm\" (UniqueName: \"kubernetes.io/projected/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-kube-api-access-skdmm\") pod \"community-operators-rwqx9\" (UID: \"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\") " pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.909981 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5gsdw"] Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.942615 4915 patch_prober.go:28] interesting pod/router-default-5444994796-kjj6c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:22:10 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Nov 24 21:22:10 crc kubenswrapper[4915]: [+]process-running ok Nov 24 21:22:10 crc kubenswrapper[4915]: healthz check failed Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.943735 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kjj6c" podUID="c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.963735 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db16378b-3c05-4a15-82f2-eb3d06649681-utilities\") pod \"certified-operators-5gsdw\" (UID: \"db16378b-3c05-4a15-82f2-eb3d06649681\") " pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.963820 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppj7k\" (UniqueName: \"kubernetes.io/projected/db16378b-3c05-4a15-82f2-eb3d06649681-kube-api-access-ppj7k\") pod \"certified-operators-5gsdw\" (UID: \"db16378b-3c05-4a15-82f2-eb3d06649681\") " pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:22:10 crc kubenswrapper[4915]: I1124 21:22:10.963886 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db16378b-3c05-4a15-82f2-eb3d06649681-catalog-content\") pod \"certified-operators-5gsdw\" (UID: \"db16378b-3c05-4a15-82f2-eb3d06649681\") " pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.016272 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-97jl2\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.063354 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-btbhr"] Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.064649 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db16378b-3c05-4a15-82f2-eb3d06649681-catalog-content\") pod \"certified-operators-5gsdw\" (UID: \"db16378b-3c05-4a15-82f2-eb3d06649681\") " pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.064734 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db16378b-3c05-4a15-82f2-eb3d06649681-utilities\") pod \"certified-operators-5gsdw\" (UID: \"db16378b-3c05-4a15-82f2-eb3d06649681\") " pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.064752 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.064816 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppj7k\" (UniqueName: \"kubernetes.io/projected/db16378b-3c05-4a15-82f2-eb3d06649681-kube-api-access-ppj7k\") pod \"certified-operators-5gsdw\" (UID: \"db16378b-3c05-4a15-82f2-eb3d06649681\") " pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.065109 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db16378b-3c05-4a15-82f2-eb3d06649681-catalog-content\") pod \"certified-operators-5gsdw\" (UID: \"db16378b-3c05-4a15-82f2-eb3d06649681\") " pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.065435 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db16378b-3c05-4a15-82f2-eb3d06649681-utilities\") pod \"certified-operators-5gsdw\" (UID: \"db16378b-3c05-4a15-82f2-eb3d06649681\") 
" pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.068639 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.073267 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-btbhr"] Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.093154 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppj7k\" (UniqueName: \"kubernetes.io/projected/db16378b-3c05-4a15-82f2-eb3d06649681-kube-api-access-ppj7k\") pod \"certified-operators-5gsdw\" (UID: \"db16378b-3c05-4a15-82f2-eb3d06649681\") " pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.136189 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-llzdr" event={"ID":"f3e83404-96f6-4a56-9642-316296b61562","Type":"ContainerStarted","Data":"9a8f1141c3527de2ca0a2ba719acbeef6ffb4842967ae4f61733facde8710ce3"} Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.146514 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"a1c2338f45cf39c1430b368e4eee205c0b48c94542fa4abe9d4fcae0ece01232"} Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.146568 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1968a7ed877f97c29569bb0881d5eb9de2c21b3146f89a8bf9c1570b5034705f"} Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.156583 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="d8e29389-d3c0-4175-81dc-ecec5a0c5f35" containerID="d2c3b9371de3176ecfcf1bd0014531bf5846d041c44d265d3cdeea6e238ad4c9" exitCode=0 Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.156679 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" event={"ID":"d8e29389-d3c0-4175-81dc-ecec5a0c5f35","Type":"ContainerDied","Data":"d2c3b9371de3176ecfcf1bd0014531bf5846d041c44d265d3cdeea6e238ad4c9"} Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.163838 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"17d7b3d98e7b677a342d2de4609fb6e1734f534a5038c5e3f9c6ba7ad009de2b"} Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.164165 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.166323 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfqkf\" (UniqueName: \"kubernetes.io/projected/206d8bda-d169-42af-bbaa-ac3ddbef52a2-kube-api-access-kfqkf\") pod \"community-operators-btbhr\" (UID: \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\") " pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.166364 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206d8bda-d169-42af-bbaa-ac3ddbef52a2-catalog-content\") pod \"community-operators-btbhr\" (UID: \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\") " pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.166388 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206d8bda-d169-42af-bbaa-ac3ddbef52a2-utilities\") pod \"community-operators-btbhr\" (UID: \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\") " pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.167266 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-llzdr" podStartSLOduration=11.16723581 podStartE2EDuration="11.16723581s" podCreationTimestamp="2025-11-24 21:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:11.161860152 +0000 UTC m=+149.478112345" watchObservedRunningTime="2025-11-24 21:22:11.16723581 +0000 UTC m=+149.483487983" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.180816 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prtlc"] Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.239311 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.264484 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.267535 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206d8bda-d169-42af-bbaa-ac3ddbef52a2-catalog-content\") pod \"community-operators-btbhr\" (UID: \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\") " pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.267612 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206d8bda-d169-42af-bbaa-ac3ddbef52a2-utilities\") pod \"community-operators-btbhr\" (UID: \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\") " pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.267707 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfqkf\" (UniqueName: \"kubernetes.io/projected/206d8bda-d169-42af-bbaa-ac3ddbef52a2-kube-api-access-kfqkf\") pod \"community-operators-btbhr\" (UID: \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\") " pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.269364 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206d8bda-d169-42af-bbaa-ac3ddbef52a2-catalog-content\") pod \"community-operators-btbhr\" (UID: \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\") " pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.269667 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206d8bda-d169-42af-bbaa-ac3ddbef52a2-utilities\") pod \"community-operators-btbhr\" (UID: \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\") " 
pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.300425 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfqkf\" (UniqueName: \"kubernetes.io/projected/206d8bda-d169-42af-bbaa-ac3ddbef52a2-kube-api-access-kfqkf\") pod \"community-operators-btbhr\" (UID: \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\") " pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.328667 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rwqx9"] Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.396446 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.497395 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.504965 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-d8b76" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.546468 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5gsdw"] Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.641628 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-97jl2"] Nov 24 21:22:11 crc kubenswrapper[4915]: W1124 21:22:11.663021 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03660b87_4011_4ee8_ac77_a26a9f853005.slice/crio-1b431c8d983a04a81624e0f7c65196ff9b33e70bbf148ee7c2e910df1eaded68 WatchSource:0}: Error finding container 1b431c8d983a04a81624e0f7c65196ff9b33e70bbf148ee7c2e910df1eaded68: Status 404 returned error 
can't find the container with id 1b431c8d983a04a81624e0f7c65196ff9b33e70bbf148ee7c2e910df1eaded68 Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.721895 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.764875 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-btbhr"] Nov 24 21:22:11 crc kubenswrapper[4915]: W1124 21:22:11.786912 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod206d8bda_d169_42af_bbaa_ac3ddbef52a2.slice/crio-c472332d0250f91a57be7f373713e2a0db646fc77962f0e6d01bda5fc164e30c WatchSource:0}: Error finding container c472332d0250f91a57be7f373713e2a0db646fc77962f0e6d01bda5fc164e30c: Status 404 returned error can't find the container with id c472332d0250f91a57be7f373713e2a0db646fc77962f0e6d01bda5fc164e30c Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.942295 4915 patch_prober.go:28] interesting pod/router-default-5444994796-kjj6c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:22:11 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Nov 24 21:22:11 crc kubenswrapper[4915]: [+]process-running ok Nov 24 21:22:11 crc kubenswrapper[4915]: healthz check failed Nov 24 21:22:11 crc kubenswrapper[4915]: I1124 21:22:11.942353 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kjj6c" podUID="c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.109048 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.113314 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zsbmp" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.180210 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"5d804705fd7d0a6e949ee4053f98e3c65ba52fda249a242ad7186790366e2365"} Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.180280 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"74e61f8f09900938945e5eb0e582943ddf8e00a22e36cf6ac0d1c338575e1645"} Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.183714 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0aca269aa987e6929e850e44469d0fa522a13f4df52be6000defe43469c4ad56"} Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.185346 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btbhr" event={"ID":"206d8bda-d169-42af-bbaa-ac3ddbef52a2","Type":"ContainerStarted","Data":"c472332d0250f91a57be7f373713e2a0db646fc77962f0e6d01bda5fc164e30c"} Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.186425 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gsdw" event={"ID":"db16378b-3c05-4a15-82f2-eb3d06649681","Type":"ContainerStarted","Data":"8403b1ac37d9b374e3be7e1e892767e9c83d8c93e28b9f926fc72a8160d8d9e5"} Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.189216 4915 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" event={"ID":"03660b87-4011-4ee8-ac77-a26a9f853005","Type":"ContainerStarted","Data":"1b431c8d983a04a81624e0f7c65196ff9b33e70bbf148ee7c2e910df1eaded68"} Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.193332 4915 generic.go:334] "Generic (PLEG): container finished" podID="04ecdca8-2d28-4a23-9c7d-107d0a882bc9" containerID="fea4aa568a5ed6298c30c2e1427ce17bf59aa3db08ebe21b426a31a386b9c61a" exitCode=0 Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.194092 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwqx9" event={"ID":"04ecdca8-2d28-4a23-9c7d-107d0a882bc9","Type":"ContainerDied","Data":"fea4aa568a5ed6298c30c2e1427ce17bf59aa3db08ebe21b426a31a386b9c61a"} Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.194111 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwqx9" event={"ID":"04ecdca8-2d28-4a23-9c7d-107d0a882bc9","Type":"ContainerStarted","Data":"cde896f2ccc269ef487e18696dda8ba45ad5a2baf5303e139145c68d1903b671"} Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.195641 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.196601 4915 generic.go:334] "Generic (PLEG): container finished" podID="200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" containerID="38ca78ec8ca521c59a1d0c9f1100906e0876cf355309bb6d257c6305c2c35927" exitCode=0 Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.197626 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prtlc" event={"ID":"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd","Type":"ContainerDied","Data":"38ca78ec8ca521c59a1d0c9f1100906e0876cf355309bb6d257c6305c2c35927"} Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.197678 4915 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prtlc" event={"ID":"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd","Type":"ContainerStarted","Data":"06037f09690cbb286c4d17642b8a910684be392b93628eb0e85bf6a5c1717357"} Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.442417 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.445490 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.583751 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-secret-volume\") pod \"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\" (UID: \"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\") " Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.583857 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-config-volume\") pod \"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\" (UID: \"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\") " Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.583885 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpf8p\" (UniqueName: \"kubernetes.io/projected/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-kube-api-access-fpf8p\") pod \"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\" (UID: \"d8e29389-d3c0-4175-81dc-ecec5a0c5f35\") " Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.587365 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "d8e29389-d3c0-4175-81dc-ecec5a0c5f35" (UID: "d8e29389-d3c0-4175-81dc-ecec5a0c5f35"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.590873 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d8e29389-d3c0-4175-81dc-ecec5a0c5f35" (UID: "d8e29389-d3c0-4175-81dc-ecec5a0c5f35"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.593102 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-kube-api-access-fpf8p" (OuterVolumeSpecName: "kube-api-access-fpf8p") pod "d8e29389-d3c0-4175-81dc-ecec5a0c5f35" (UID: "d8e29389-d3c0-4175-81dc-ecec5a0c5f35"). InnerVolumeSpecName "kube-api-access-fpf8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.673673 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-khkz6"] Nov 24 21:22:12 crc kubenswrapper[4915]: E1124 21:22:12.673881 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8e29389-d3c0-4175-81dc-ecec5a0c5f35" containerName="collect-profiles" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.673892 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e29389-d3c0-4175-81dc-ecec5a0c5f35" containerName="collect-profiles" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.673989 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8e29389-d3c0-4175-81dc-ecec5a0c5f35" containerName="collect-profiles" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.674760 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.678469 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.684825 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.684843 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.684853 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpf8p\" (UniqueName: \"kubernetes.io/projected/d8e29389-d3c0-4175-81dc-ecec5a0c5f35-kube-api-access-fpf8p\") on node 
\"crc\" DevicePath \"\"" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.736453 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-khkz6"] Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.786157 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4499c65-6038-409a-964e-4b00d5286518-catalog-content\") pod \"redhat-marketplace-khkz6\" (UID: \"a4499c65-6038-409a-964e-4b00d5286518\") " pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.786208 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4499c65-6038-409a-964e-4b00d5286518-utilities\") pod \"redhat-marketplace-khkz6\" (UID: \"a4499c65-6038-409a-964e-4b00d5286518\") " pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.786250 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5grtb\" (UniqueName: \"kubernetes.io/projected/a4499c65-6038-409a-964e-4b00d5286518-kube-api-access-5grtb\") pod \"redhat-marketplace-khkz6\" (UID: \"a4499c65-6038-409a-964e-4b00d5286518\") " pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.887968 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4499c65-6038-409a-964e-4b00d5286518-catalog-content\") pod \"redhat-marketplace-khkz6\" (UID: \"a4499c65-6038-409a-964e-4b00d5286518\") " pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.888040 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/a4499c65-6038-409a-964e-4b00d5286518-utilities\") pod \"redhat-marketplace-khkz6\" (UID: \"a4499c65-6038-409a-964e-4b00d5286518\") " pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.888081 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5grtb\" (UniqueName: \"kubernetes.io/projected/a4499c65-6038-409a-964e-4b00d5286518-kube-api-access-5grtb\") pod \"redhat-marketplace-khkz6\" (UID: \"a4499c65-6038-409a-964e-4b00d5286518\") " pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.888911 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4499c65-6038-409a-964e-4b00d5286518-catalog-content\") pod \"redhat-marketplace-khkz6\" (UID: \"a4499c65-6038-409a-964e-4b00d5286518\") " pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.889033 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4499c65-6038-409a-964e-4b00d5286518-utilities\") pod \"redhat-marketplace-khkz6\" (UID: \"a4499c65-6038-409a-964e-4b00d5286518\") " pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.891212 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.891932 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.894369 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.896482 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.912359 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5grtb\" (UniqueName: \"kubernetes.io/projected/a4499c65-6038-409a-964e-4b00d5286518-kube-api-access-5grtb\") pod \"redhat-marketplace-khkz6\" (UID: \"a4499c65-6038-409a-964e-4b00d5286518\") " pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.938008 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.943359 4915 patch_prober.go:28] interesting pod/router-default-5444994796-kjj6c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:22:12 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Nov 24 21:22:12 crc kubenswrapper[4915]: [+]process-running ok Nov 24 21:22:12 crc kubenswrapper[4915]: healthz check failed Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.943421 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kjj6c" podUID="c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.986441 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.988836 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/723c1a8c-4c1d-4851-ac8d-006f106f9fba-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"723c1a8c-4c1d-4851-ac8d-006f106f9fba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:22:12 crc kubenswrapper[4915]: I1124 21:22:12.989053 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/723c1a8c-4c1d-4851-ac8d-006f106f9fba-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"723c1a8c-4c1d-4851-ac8d-006f106f9fba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.059308 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b99gc"] Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.060733 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.068320 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b99gc"] Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.095843 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/723c1a8c-4c1d-4851-ac8d-006f106f9fba-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"723c1a8c-4c1d-4851-ac8d-006f106f9fba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.095915 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/723c1a8c-4c1d-4851-ac8d-006f106f9fba-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"723c1a8c-4c1d-4851-ac8d-006f106f9fba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.095956 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/723c1a8c-4c1d-4851-ac8d-006f106f9fba-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"723c1a8c-4c1d-4851-ac8d-006f106f9fba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.133758 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/723c1a8c-4c1d-4851-ac8d-006f106f9fba-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"723c1a8c-4c1d-4851-ac8d-006f106f9fba\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.196840 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3dc0e018-8dcb-4b41-869e-18ed5010c682-utilities\") pod \"redhat-marketplace-b99gc\" (UID: \"3dc0e018-8dcb-4b41-869e-18ed5010c682\") " pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.196940 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gcr7\" (UniqueName: \"kubernetes.io/projected/3dc0e018-8dcb-4b41-869e-18ed5010c682-kube-api-access-6gcr7\") pod \"redhat-marketplace-b99gc\" (UID: \"3dc0e018-8dcb-4b41-869e-18ed5010c682\") " pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.196988 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc0e018-8dcb-4b41-869e-18ed5010c682-catalog-content\") pod \"redhat-marketplace-b99gc\" (UID: \"3dc0e018-8dcb-4b41-869e-18ed5010c682\") " pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.211118 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.220224 4915 generic.go:334] "Generic (PLEG): container finished" podID="206d8bda-d169-42af-bbaa-ac3ddbef52a2" containerID="12fe7519e08cb46df9e88abf0bee7c1d53cd15f3af605fb00baf3091af3c6164" exitCode=0 Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.220394 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btbhr" event={"ID":"206d8bda-d169-42af-bbaa-ac3ddbef52a2","Type":"ContainerDied","Data":"12fe7519e08cb46df9e88abf0bee7c1d53cd15f3af605fb00baf3091af3c6164"} Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.241013 4915 generic.go:334] "Generic (PLEG): container finished" podID="db16378b-3c05-4a15-82f2-eb3d06649681" containerID="28ab6e3e3f9500112cc2420d694545c3efe91ba66717dc125ae3c3d87694b743" exitCode=0 Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.241126 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gsdw" event={"ID":"db16378b-3c05-4a15-82f2-eb3d06649681","Type":"ContainerDied","Data":"28ab6e3e3f9500112cc2420d694545c3efe91ba66717dc125ae3c3d87694b743"} Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.243022 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-c98n4" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.249506 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" event={"ID":"03660b87-4011-4ee8-ac77-a26a9f853005","Type":"ContainerStarted","Data":"baf6a2c4ea026d31999b13c60fb342d528aedb9a5d653f881fd9a7d8c8bb6b8e"} Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.249739 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:13 crc 
kubenswrapper[4915]: I1124 21:22:13.266177 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.266358 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz" event={"ID":"d8e29389-d3c0-4175-81dc-ecec5a0c5f35","Type":"ContainerDied","Data":"3d77f449ad9c0e7fa93c45ce8ce5a7b1495ff9645d0fd2f6a7dd37d9c0557feb"} Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.266415 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d77f449ad9c0e7fa93c45ce8ce5a7b1495ff9645d0fd2f6a7dd37d9c0557feb" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.298398 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dc0e018-8dcb-4b41-869e-18ed5010c682-utilities\") pod \"redhat-marketplace-b99gc\" (UID: \"3dc0e018-8dcb-4b41-869e-18ed5010c682\") " pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.298499 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gcr7\" (UniqueName: \"kubernetes.io/projected/3dc0e018-8dcb-4b41-869e-18ed5010c682-kube-api-access-6gcr7\") pod \"redhat-marketplace-b99gc\" (UID: \"3dc0e018-8dcb-4b41-869e-18ed5010c682\") " pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.298543 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc0e018-8dcb-4b41-869e-18ed5010c682-catalog-content\") pod \"redhat-marketplace-b99gc\" (UID: \"3dc0e018-8dcb-4b41-869e-18ed5010c682\") " pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 
21:22:13.299150 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc0e018-8dcb-4b41-869e-18ed5010c682-catalog-content\") pod \"redhat-marketplace-b99gc\" (UID: \"3dc0e018-8dcb-4b41-869e-18ed5010c682\") " pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.299339 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dc0e018-8dcb-4b41-869e-18ed5010c682-utilities\") pod \"redhat-marketplace-b99gc\" (UID: \"3dc0e018-8dcb-4b41-869e-18ed5010c682\") " pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.345845 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" podStartSLOduration=131.345821598 podStartE2EDuration="2m11.345821598s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:13.336508564 +0000 UTC m=+151.652760767" watchObservedRunningTime="2025-11-24 21:22:13.345821598 +0000 UTC m=+151.662073771" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.356861 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gcr7\" (UniqueName: \"kubernetes.io/projected/3dc0e018-8dcb-4b41-869e-18ed5010c682-kube-api-access-6gcr7\") pod \"redhat-marketplace-b99gc\" (UID: \"3dc0e018-8dcb-4b41-869e-18ed5010c682\") " pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.379345 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.439145 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.440081 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.449931 4915 patch_prober.go:28] interesting pod/console-f9d7485db-x7cqd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.449977 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-x7cqd" podUID="c25872a5-42e3-4e20-ad54-594477784fa2" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.644445 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-khkz6"] Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.694052 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-br8zr"] Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.695327 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.699153 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.730318 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-br8zr"] Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.780870 4915 patch_prober.go:28] interesting pod/downloads-7954f5f757-d7cvw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.781175 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-d7cvw" podUID="ea64de69-8cc1-4935-8dc5-908bf44bb2d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.782455 4915 patch_prober.go:28] interesting pod/downloads-7954f5f757-d7cvw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.782498 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-d7cvw" podUID="ea64de69-8cc1-4935-8dc5-908bf44bb2d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.824095 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd92d417-da27-4446-aade-0a17abc2eecf-utilities\") pod \"redhat-operators-br8zr\" (UID: \"dd92d417-da27-4446-aade-0a17abc2eecf\") " pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.825044 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd92d417-da27-4446-aade-0a17abc2eecf-catalog-content\") pod \"redhat-operators-br8zr\" (UID: \"dd92d417-da27-4446-aade-0a17abc2eecf\") " pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.825112 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvf9l\" (UniqueName: \"kubernetes.io/projected/dd92d417-da27-4446-aade-0a17abc2eecf-kube-api-access-nvf9l\") pod \"redhat-operators-br8zr\" (UID: \"dd92d417-da27-4446-aade-0a17abc2eecf\") " pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.839612 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.926750 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd92d417-da27-4446-aade-0a17abc2eecf-catalog-content\") pod \"redhat-operators-br8zr\" (UID: \"dd92d417-da27-4446-aade-0a17abc2eecf\") " pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.926848 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvf9l\" (UniqueName: \"kubernetes.io/projected/dd92d417-da27-4446-aade-0a17abc2eecf-kube-api-access-nvf9l\") pod \"redhat-operators-br8zr\" (UID: \"dd92d417-da27-4446-aade-0a17abc2eecf\") " 
pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.926909 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd92d417-da27-4446-aade-0a17abc2eecf-utilities\") pod \"redhat-operators-br8zr\" (UID: \"dd92d417-da27-4446-aade-0a17abc2eecf\") " pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.927422 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd92d417-da27-4446-aade-0a17abc2eecf-utilities\") pod \"redhat-operators-br8zr\" (UID: \"dd92d417-da27-4446-aade-0a17abc2eecf\") " pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.927601 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd92d417-da27-4446-aade-0a17abc2eecf-catalog-content\") pod \"redhat-operators-br8zr\" (UID: \"dd92d417-da27-4446-aade-0a17abc2eecf\") " pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.939027 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.945891 4915 patch_prober.go:28] interesting pod/router-default-5444994796-kjj6c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:22:13 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Nov 24 21:22:13 crc kubenswrapper[4915]: [+]process-running ok Nov 24 21:22:13 crc kubenswrapper[4915]: healthz check failed Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.946003 4915 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-kjj6c" podUID="c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.966109 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvf9l\" (UniqueName: \"kubernetes.io/projected/dd92d417-da27-4446-aade-0a17abc2eecf-kube-api-access-nvf9l\") pod \"redhat-operators-br8zr\" (UID: \"dd92d417-da27-4446-aade-0a17abc2eecf\") " pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.996847 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:22:13 crc kubenswrapper[4915]: I1124 21:22:13.999642 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b99gc"] Nov 24 21:22:14 crc kubenswrapper[4915]: W1124 21:22:14.016217 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dc0e018_8dcb_4b41_869e_18ed5010c682.slice/crio-5695a62aec404145b5273c805432bc0d587f6bf9763ba7576fd37b59867b1ab3 WatchSource:0}: Error finding container 5695a62aec404145b5273c805432bc0d587f6bf9763ba7576fd37b59867b1ab3: Status 404 returned error can't find the container with id 5695a62aec404145b5273c805432bc0d587f6bf9763ba7576fd37b59867b1ab3 Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.050611 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.076130 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4lth2"] Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.078295 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.094894 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4lth2"] Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.136153 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b56c92-7ee3-4d25-8161-1331558cc024-catalog-content\") pod \"redhat-operators-4lth2\" (UID: \"68b56c92-7ee3-4d25-8161-1331558cc024\") " pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.136238 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtb6z\" (UniqueName: \"kubernetes.io/projected/68b56c92-7ee3-4d25-8161-1331558cc024-kube-api-access-rtb6z\") pod \"redhat-operators-4lth2\" (UID: \"68b56c92-7ee3-4d25-8161-1331558cc024\") " pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.136369 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b56c92-7ee3-4d25-8161-1331558cc024-utilities\") pod \"redhat-operators-4lth2\" (UID: \"68b56c92-7ee3-4d25-8161-1331558cc024\") " pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.162203 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.164006 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.167905 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.174051 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.174169 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.239182 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa8db4aa-bd83-40e5-8673-4f6db186c161-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aa8db4aa-bd83-40e5-8673-4f6db186c161\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.239251 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b56c92-7ee3-4d25-8161-1331558cc024-catalog-content\") pod \"redhat-operators-4lth2\" (UID: \"68b56c92-7ee3-4d25-8161-1331558cc024\") " pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.239286 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtb6z\" (UniqueName: \"kubernetes.io/projected/68b56c92-7ee3-4d25-8161-1331558cc024-kube-api-access-rtb6z\") pod \"redhat-operators-4lth2\" (UID: \"68b56c92-7ee3-4d25-8161-1331558cc024\") " pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.239321 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa8db4aa-bd83-40e5-8673-4f6db186c161-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"aa8db4aa-bd83-40e5-8673-4f6db186c161\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.239453 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b56c92-7ee3-4d25-8161-1331558cc024-utilities\") pod \"redhat-operators-4lth2\" (UID: \"68b56c92-7ee3-4d25-8161-1331558cc024\") " pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.239998 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b56c92-7ee3-4d25-8161-1331558cc024-utilities\") pod \"redhat-operators-4lth2\" (UID: \"68b56c92-7ee3-4d25-8161-1331558cc024\") " pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.240281 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b56c92-7ee3-4d25-8161-1331558cc024-catalog-content\") pod \"redhat-operators-4lth2\" (UID: \"68b56c92-7ee3-4d25-8161-1331558cc024\") " pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.266488 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtb6z\" (UniqueName: \"kubernetes.io/projected/68b56c92-7ee3-4d25-8161-1331558cc024-kube-api-access-rtb6z\") pod \"redhat-operators-4lth2\" (UID: \"68b56c92-7ee3-4d25-8161-1331558cc024\") " pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.273047 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-b99gc" event={"ID":"3dc0e018-8dcb-4b41-869e-18ed5010c682","Type":"ContainerStarted","Data":"5695a62aec404145b5273c805432bc0d587f6bf9763ba7576fd37b59867b1ab3"} Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.284467 4915 generic.go:334] "Generic (PLEG): container finished" podID="a4499c65-6038-409a-964e-4b00d5286518" containerID="63513af541226991bcef3215f7f8cb4d7ecc02e0dbe2a21f655bf6ba8d68fe1c" exitCode=0 Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.284533 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khkz6" event={"ID":"a4499c65-6038-409a-964e-4b00d5286518","Type":"ContainerDied","Data":"63513af541226991bcef3215f7f8cb4d7ecc02e0dbe2a21f655bf6ba8d68fe1c"} Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.284562 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khkz6" event={"ID":"a4499c65-6038-409a-964e-4b00d5286518","Type":"ContainerStarted","Data":"47c1298d348eec9ebbd737abd8f2f5ac7b9884f3d4b5de5b0bb7be0cfd769737"} Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.287921 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"723c1a8c-4c1d-4851-ac8d-006f106f9fba","Type":"ContainerStarted","Data":"9d34da62236da37606c26b3d416f2c545ca600f1b14ba485fa4e616798d1a82f"} Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.341212 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa8db4aa-bd83-40e5-8673-4f6db186c161-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"aa8db4aa-bd83-40e5-8673-4f6db186c161\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.342584 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/aa8db4aa-bd83-40e5-8673-4f6db186c161-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aa8db4aa-bd83-40e5-8673-4f6db186c161\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.345805 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa8db4aa-bd83-40e5-8673-4f6db186c161-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"aa8db4aa-bd83-40e5-8673-4f6db186c161\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.365487 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa8db4aa-bd83-40e5-8673-4f6db186c161-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aa8db4aa-bd83-40e5-8673-4f6db186c161\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.475893 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.514438 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.589723 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-br8zr"] Nov 24 21:22:14 crc kubenswrapper[4915]: W1124 21:22:14.604501 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd92d417_da27_4446_aade_0a17abc2eecf.slice/crio-b20f8ec2607998e4c976915ad7496c5a0ca0edeba79684dce8d98449633a61e2 WatchSource:0}: Error finding container b20f8ec2607998e4c976915ad7496c5a0ca0edeba79684dce8d98449633a61e2: Status 404 returned error can't find the container with id b20f8ec2607998e4c976915ad7496c5a0ca0edeba79684dce8d98449633a61e2 Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.944361 4915 patch_prober.go:28] interesting pod/router-default-5444994796-kjj6c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:22:14 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Nov 24 21:22:14 crc kubenswrapper[4915]: [+]process-running ok Nov 24 21:22:14 crc kubenswrapper[4915]: healthz check failed Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.944890 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kjj6c" podUID="c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:22:14 crc kubenswrapper[4915]: I1124 21:22:14.982638 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4lth2"] Nov 24 21:22:15 crc kubenswrapper[4915]: I1124 21:22:15.191218 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 24 21:22:15 crc 
kubenswrapper[4915]: W1124 21:22:15.244084 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaa8db4aa_bd83_40e5_8673_4f6db186c161.slice/crio-4912a0bebf3b435c2274347517fe93c42197df261ffd91d14214fe4e8bfdedf7 WatchSource:0}: Error finding container 4912a0bebf3b435c2274347517fe93c42197df261ffd91d14214fe4e8bfdedf7: Status 404 returned error can't find the container with id 4912a0bebf3b435c2274347517fe93c42197df261ffd91d14214fe4e8bfdedf7 Nov 24 21:22:15 crc kubenswrapper[4915]: I1124 21:22:15.298051 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"aa8db4aa-bd83-40e5-8673-4f6db186c161","Type":"ContainerStarted","Data":"4912a0bebf3b435c2274347517fe93c42197df261ffd91d14214fe4e8bfdedf7"} Nov 24 21:22:15 crc kubenswrapper[4915]: I1124 21:22:15.301022 4915 generic.go:334] "Generic (PLEG): container finished" podID="68b56c92-7ee3-4d25-8161-1331558cc024" containerID="77343e3dd0dbf9a47d1cfaf1ed5f7e435f9f5d41188162ad0d50c0a46281a7ae" exitCode=0 Nov 24 21:22:15 crc kubenswrapper[4915]: I1124 21:22:15.301339 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4lth2" event={"ID":"68b56c92-7ee3-4d25-8161-1331558cc024","Type":"ContainerDied","Data":"77343e3dd0dbf9a47d1cfaf1ed5f7e435f9f5d41188162ad0d50c0a46281a7ae"} Nov 24 21:22:15 crc kubenswrapper[4915]: I1124 21:22:15.301385 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4lth2" event={"ID":"68b56c92-7ee3-4d25-8161-1331558cc024","Type":"ContainerStarted","Data":"63aea461ee2596a117b0b9832232994833070e026984bc81bcc3c0bf24d0cc85"} Nov 24 21:22:15 crc kubenswrapper[4915]: I1124 21:22:15.304405 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"723c1a8c-4c1d-4851-ac8d-006f106f9fba","Type":"ContainerStarted","Data":"14c32cab105b403a4731828db5a2e2a982ec457af58b66e1be5cd911ee14c4ff"} Nov 24 21:22:15 crc kubenswrapper[4915]: I1124 21:22:15.316858 4915 generic.go:334] "Generic (PLEG): container finished" podID="dd92d417-da27-4446-aade-0a17abc2eecf" containerID="f1d385bc2c955e3cde30a00736ed2d1a46d99d7f1909251b9be501aa49ec8f0e" exitCode=0 Nov 24 21:22:15 crc kubenswrapper[4915]: I1124 21:22:15.316964 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-br8zr" event={"ID":"dd92d417-da27-4446-aade-0a17abc2eecf","Type":"ContainerDied","Data":"f1d385bc2c955e3cde30a00736ed2d1a46d99d7f1909251b9be501aa49ec8f0e"} Nov 24 21:22:15 crc kubenswrapper[4915]: I1124 21:22:15.316993 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-br8zr" event={"ID":"dd92d417-da27-4446-aade-0a17abc2eecf","Type":"ContainerStarted","Data":"b20f8ec2607998e4c976915ad7496c5a0ca0edeba79684dce8d98449633a61e2"} Nov 24 21:22:15 crc kubenswrapper[4915]: I1124 21:22:15.326495 4915 generic.go:334] "Generic (PLEG): container finished" podID="3dc0e018-8dcb-4b41-869e-18ed5010c682" containerID="2e9bc6a405c444acc7f1282ea19cd17b5976301498bcfdc34edfd06ca1877a28" exitCode=0 Nov 24 21:22:15 crc kubenswrapper[4915]: I1124 21:22:15.326573 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b99gc" event={"ID":"3dc0e018-8dcb-4b41-869e-18ed5010c682","Type":"ContainerDied","Data":"2e9bc6a405c444acc7f1282ea19cd17b5976301498bcfdc34edfd06ca1877a28"} Nov 24 21:22:15 crc kubenswrapper[4915]: I1124 21:22:15.335760 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.335740493 podStartE2EDuration="3.335740493s" podCreationTimestamp="2025-11-24 21:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:15.333729259 +0000 UTC m=+153.649981432" watchObservedRunningTime="2025-11-24 21:22:15.335740493 +0000 UTC m=+153.651992666" Nov 24 21:22:15 crc kubenswrapper[4915]: I1124 21:22:15.943616 4915 patch_prober.go:28] interesting pod/router-default-5444994796-kjj6c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:22:15 crc kubenswrapper[4915]: [-]has-synced failed: reason withheld Nov 24 21:22:15 crc kubenswrapper[4915]: [+]process-running ok Nov 24 21:22:15 crc kubenswrapper[4915]: healthz check failed Nov 24 21:22:15 crc kubenswrapper[4915]: I1124 21:22:15.944107 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kjj6c" podUID="c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:22:16 crc kubenswrapper[4915]: I1124 21:22:16.072725 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-hmd9t" Nov 24 21:22:16 crc kubenswrapper[4915]: I1124 21:22:16.353869 4915 generic.go:334] "Generic (PLEG): container finished" podID="723c1a8c-4c1d-4851-ac8d-006f106f9fba" containerID="14c32cab105b403a4731828db5a2e2a982ec457af58b66e1be5cd911ee14c4ff" exitCode=0 Nov 24 21:22:16 crc kubenswrapper[4915]: I1124 21:22:16.354045 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"723c1a8c-4c1d-4851-ac8d-006f106f9fba","Type":"ContainerDied","Data":"14c32cab105b403a4731828db5a2e2a982ec457af58b66e1be5cd911ee14c4ff"} Nov 24 21:22:16 crc kubenswrapper[4915]: I1124 21:22:16.376062 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"aa8db4aa-bd83-40e5-8673-4f6db186c161","Type":"ContainerStarted","Data":"a2165e57683f4067fb27c07114764c20b81f068b6eb9e75fa7b220479ab2a238"} Nov 24 21:22:16 crc kubenswrapper[4915]: I1124 21:22:16.418702 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.418683339 podStartE2EDuration="2.418683339s" podCreationTimestamp="2025-11-24 21:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:16.415297643 +0000 UTC m=+154.731549816" watchObservedRunningTime="2025-11-24 21:22:16.418683339 +0000 UTC m=+154.734935512" Nov 24 21:22:16 crc kubenswrapper[4915]: I1124 21:22:16.943661 4915 patch_prober.go:28] interesting pod/router-default-5444994796-kjj6c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 24 21:22:16 crc kubenswrapper[4915]: [+]has-synced ok Nov 24 21:22:16 crc kubenswrapper[4915]: [+]process-running ok Nov 24 21:22:16 crc kubenswrapper[4915]: healthz check failed Nov 24 21:22:16 crc kubenswrapper[4915]: I1124 21:22:16.943755 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kjj6c" podUID="c67fa5c2-5760-49b8-83d8-9de5cb8bcbc5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:22:17 crc kubenswrapper[4915]: I1124 21:22:17.392305 4915 generic.go:334] "Generic (PLEG): container finished" podID="aa8db4aa-bd83-40e5-8673-4f6db186c161" containerID="a2165e57683f4067fb27c07114764c20b81f068b6eb9e75fa7b220479ab2a238" exitCode=0 Nov 24 21:22:17 crc kubenswrapper[4915]: I1124 21:22:17.392555 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"aa8db4aa-bd83-40e5-8673-4f6db186c161","Type":"ContainerDied","Data":"a2165e57683f4067fb27c07114764c20b81f068b6eb9e75fa7b220479ab2a238"} Nov 24 21:22:17 crc kubenswrapper[4915]: I1124 21:22:17.728565 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:22:17 crc kubenswrapper[4915]: I1124 21:22:17.806619 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/723c1a8c-4c1d-4851-ac8d-006f106f9fba-kube-api-access\") pod \"723c1a8c-4c1d-4851-ac8d-006f106f9fba\" (UID: \"723c1a8c-4c1d-4851-ac8d-006f106f9fba\") " Nov 24 21:22:17 crc kubenswrapper[4915]: I1124 21:22:17.806719 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/723c1a8c-4c1d-4851-ac8d-006f106f9fba-kubelet-dir\") pod \"723c1a8c-4c1d-4851-ac8d-006f106f9fba\" (UID: \"723c1a8c-4c1d-4851-ac8d-006f106f9fba\") " Nov 24 21:22:17 crc kubenswrapper[4915]: I1124 21:22:17.807294 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/723c1a8c-4c1d-4851-ac8d-006f106f9fba-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "723c1a8c-4c1d-4851-ac8d-006f106f9fba" (UID: "723c1a8c-4c1d-4851-ac8d-006f106f9fba"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:22:17 crc kubenswrapper[4915]: I1124 21:22:17.813967 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/723c1a8c-4c1d-4851-ac8d-006f106f9fba-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "723c1a8c-4c1d-4851-ac8d-006f106f9fba" (UID: "723c1a8c-4c1d-4851-ac8d-006f106f9fba"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:22:17 crc kubenswrapper[4915]: I1124 21:22:17.908323 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/723c1a8c-4c1d-4851-ac8d-006f106f9fba-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:17 crc kubenswrapper[4915]: I1124 21:22:17.908363 4915 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/723c1a8c-4c1d-4851-ac8d-006f106f9fba-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:17 crc kubenswrapper[4915]: I1124 21:22:17.942149 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:17 crc kubenswrapper[4915]: I1124 21:22:17.946853 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-kjj6c" Nov 24 21:22:18 crc kubenswrapper[4915]: I1124 21:22:18.428121 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 24 21:22:18 crc kubenswrapper[4915]: I1124 21:22:18.443474 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"723c1a8c-4c1d-4851-ac8d-006f106f9fba","Type":"ContainerDied","Data":"9d34da62236da37606c26b3d416f2c545ca600f1b14ba485fa4e616798d1a82f"} Nov 24 21:22:18 crc kubenswrapper[4915]: I1124 21:22:18.443729 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d34da62236da37606c26b3d416f2c545ca600f1b14ba485fa4e616798d1a82f" Nov 24 21:22:18 crc kubenswrapper[4915]: I1124 21:22:18.780297 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:22:18 crc kubenswrapper[4915]: I1124 21:22:18.821996 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa8db4aa-bd83-40e5-8673-4f6db186c161-kubelet-dir\") pod \"aa8db4aa-bd83-40e5-8673-4f6db186c161\" (UID: \"aa8db4aa-bd83-40e5-8673-4f6db186c161\") " Nov 24 21:22:18 crc kubenswrapper[4915]: I1124 21:22:18.822055 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa8db4aa-bd83-40e5-8673-4f6db186c161-kube-api-access\") pod \"aa8db4aa-bd83-40e5-8673-4f6db186c161\" (UID: \"aa8db4aa-bd83-40e5-8673-4f6db186c161\") " Nov 24 21:22:18 crc kubenswrapper[4915]: I1124 21:22:18.822179 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa8db4aa-bd83-40e5-8673-4f6db186c161-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aa8db4aa-bd83-40e5-8673-4f6db186c161" (UID: "aa8db4aa-bd83-40e5-8673-4f6db186c161"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:22:18 crc kubenswrapper[4915]: I1124 21:22:18.822297 4915 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa8db4aa-bd83-40e5-8673-4f6db186c161-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:18 crc kubenswrapper[4915]: I1124 21:22:18.827658 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa8db4aa-bd83-40e5-8673-4f6db186c161-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aa8db4aa-bd83-40e5-8673-4f6db186c161" (UID: "aa8db4aa-bd83-40e5-8673-4f6db186c161"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:22:18 crc kubenswrapper[4915]: I1124 21:22:18.923147 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aa8db4aa-bd83-40e5-8673-4f6db186c161-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 24 21:22:19 crc kubenswrapper[4915]: I1124 21:22:19.446059 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"aa8db4aa-bd83-40e5-8673-4f6db186c161","Type":"ContainerDied","Data":"4912a0bebf3b435c2274347517fe93c42197df261ffd91d14214fe4e8bfdedf7"} Nov 24 21:22:19 crc kubenswrapper[4915]: I1124 21:22:19.446102 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4912a0bebf3b435c2274347517fe93c42197df261ffd91d14214fe4e8bfdedf7" Nov 24 21:22:19 crc kubenswrapper[4915]: I1124 21:22:19.446155 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 24 21:22:23 crc kubenswrapper[4915]: I1124 21:22:23.449590 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:23 crc kubenswrapper[4915]: I1124 21:22:23.454067 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:22:23 crc kubenswrapper[4915]: I1124 21:22:23.779476 4915 patch_prober.go:28] interesting pod/downloads-7954f5f757-d7cvw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 24 21:22:23 crc kubenswrapper[4915]: I1124 21:22:23.779528 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-d7cvw" podUID="ea64de69-8cc1-4935-8dc5-908bf44bb2d4" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 24 21:22:23 crc kubenswrapper[4915]: I1124 21:22:23.779563 4915 patch_prober.go:28] interesting pod/downloads-7954f5f757-d7cvw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Nov 24 21:22:23 crc kubenswrapper[4915]: I1124 21:22:23.779610 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-d7cvw" podUID="ea64de69-8cc1-4935-8dc5-908bf44bb2d4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Nov 24 21:22:24 crc kubenswrapper[4915]: I1124 21:22:24.327570 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:22:24 crc kubenswrapper[4915]: I1124 21:22:24.327852 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:22:24 crc kubenswrapper[4915]: I1124 21:22:24.403653 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs\") pod \"network-metrics-daemon-hkc4w\" (UID: \"a785aaf6-e561-47e9-a3ff-69e6930c5941\") " pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 
21:22:24 crc kubenswrapper[4915]: I1124 21:22:24.411336 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a785aaf6-e561-47e9-a3ff-69e6930c5941-metrics-certs\") pod \"network-metrics-daemon-hkc4w\" (UID: \"a785aaf6-e561-47e9-a3ff-69e6930c5941\") " pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:22:24 crc kubenswrapper[4915]: I1124 21:22:24.664497 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hkc4w" Nov 24 21:22:31 crc kubenswrapper[4915]: I1124 21:22:31.271135 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:22:33 crc kubenswrapper[4915]: I1124 21:22:33.787562 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-d7cvw" Nov 24 21:22:43 crc kubenswrapper[4915]: I1124 21:22:43.241750 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-w5jct" Nov 24 21:22:48 crc kubenswrapper[4915]: E1124 21:22:48.141632 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 21:22:48 crc kubenswrapper[4915]: E1124 21:22:48.142148 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kfqkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-btbhr_openshift-marketplace(206d8bda-d169-42af-bbaa-ac3ddbef52a2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:22:48 crc kubenswrapper[4915]: E1124 21:22:48.143518 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-btbhr" podUID="206d8bda-d169-42af-bbaa-ac3ddbef52a2" Nov 24 21:22:50 crc 
kubenswrapper[4915]: I1124 21:22:50.494508 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 24 21:22:52 crc kubenswrapper[4915]: E1124 21:22:52.842068 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-btbhr" podUID="206d8bda-d169-42af-bbaa-ac3ddbef52a2" Nov 24 21:22:53 crc kubenswrapper[4915]: E1124 21:22:53.549153 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 24 21:22:53 crc kubenswrapper[4915]: E1124 21:22:53.549423 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppj7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5gsdw_openshift-marketplace(db16378b-3c05-4a15-82f2-eb3d06649681): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:22:53 crc kubenswrapper[4915]: E1124 21:22:53.551098 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5gsdw" podUID="db16378b-3c05-4a15-82f2-eb3d06649681" Nov 24 21:22:54 crc 
kubenswrapper[4915]: I1124 21:22:54.327920 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:22:54 crc kubenswrapper[4915]: I1124 21:22:54.328359 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:22:54 crc kubenswrapper[4915]: E1124 21:22:54.503871 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5gsdw" podUID="db16378b-3c05-4a15-82f2-eb3d06649681" Nov 24 21:22:54 crc kubenswrapper[4915]: E1124 21:22:54.641838 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 21:22:54 crc kubenswrapper[4915]: E1124 21:22:54.641983 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6gcr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-b99gc_openshift-marketplace(3dc0e018-8dcb-4b41-869e-18ed5010c682): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:22:54 crc kubenswrapper[4915]: E1124 21:22:54.643272 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-b99gc" podUID="3dc0e018-8dcb-4b41-869e-18ed5010c682" Nov 24 21:22:54 crc 
kubenswrapper[4915]: E1124 21:22:54.667005 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 24 21:22:54 crc kubenswrapper[4915]: E1124 21:22:54.667186 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skdmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-rwqx9_openshift-marketplace(04ecdca8-2d28-4a23-9c7d-107d0a882bc9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:22:54 crc kubenswrapper[4915]: E1124 21:22:54.669156 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rwqx9" podUID="04ecdca8-2d28-4a23-9c7d-107d0a882bc9" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.334361 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rwqx9" podUID="04ecdca8-2d28-4a23-9c7d-107d0a882bc9" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.334361 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-b99gc" podUID="3dc0e018-8dcb-4b41-869e-18ed5010c682" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.429787 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.430377 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvf9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-br8zr_openshift-marketplace(dd92d417-da27-4446-aade-0a17abc2eecf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.431678 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-br8zr" podUID="dd92d417-da27-4446-aade-0a17abc2eecf" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.453134 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.453312 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2g8m8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},
TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-prtlc_openshift-marketplace(200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.454486 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-prtlc" podUID="200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.472677 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.472842 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rtb6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4lth2_openshift-marketplace(68b56c92-7ee3-4d25-8161-1331558cc024): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.474050 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4lth2" podUID="68b56c92-7ee3-4d25-8161-1331558cc024" Nov 24 21:22:57 crc 
kubenswrapper[4915]: E1124 21:22:57.504009 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.504177 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5grtb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-khkz6_openshift-marketplace(a4499c65-6038-409a-964e-4b00d5286518): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.505657 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-khkz6" podUID="a4499c65-6038-409a-964e-4b00d5286518" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.665833 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-prtlc" podUID="200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.666305 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4lth2" podUID="68b56c92-7ee3-4d25-8161-1331558cc024" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.666676 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-khkz6" podUID="a4499c65-6038-409a-964e-4b00d5286518" Nov 24 21:22:57 crc kubenswrapper[4915]: E1124 21:22:57.667192 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-br8zr" podUID="dd92d417-da27-4446-aade-0a17abc2eecf" Nov 24 21:22:57 crc kubenswrapper[4915]: I1124 21:22:57.784353 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hkc4w"] Nov 24 21:22:57 crc kubenswrapper[4915]: W1124 21:22:57.794106 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda785aaf6_e561_47e9_a3ff_69e6930c5941.slice/crio-245b263560db227860edf17387f81a63fc000e4115d0a7f3eff194c53b096b50 WatchSource:0}: Error finding container 245b263560db227860edf17387f81a63fc000e4115d0a7f3eff194c53b096b50: Status 404 returned error can't find the container with id 245b263560db227860edf17387f81a63fc000e4115d0a7f3eff194c53b096b50 Nov 24 21:22:58 crc kubenswrapper[4915]: I1124 21:22:58.672328 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" event={"ID":"a785aaf6-e561-47e9-a3ff-69e6930c5941","Type":"ContainerStarted","Data":"8decdfef4dabd878420ef7ec4dc5b60800a799672298e9ca3b374f5a91db1116"} Nov 24 21:22:58 crc kubenswrapper[4915]: I1124 21:22:58.673484 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" event={"ID":"a785aaf6-e561-47e9-a3ff-69e6930c5941","Type":"ContainerStarted","Data":"34adfe943728383dc85d2e21894c1e0eb915d6a54640a5cec2e10d1319045f4a"} Nov 24 21:22:58 crc kubenswrapper[4915]: I1124 21:22:58.673528 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hkc4w" event={"ID":"a785aaf6-e561-47e9-a3ff-69e6930c5941","Type":"ContainerStarted","Data":"245b263560db227860edf17387f81a63fc000e4115d0a7f3eff194c53b096b50"} Nov 24 21:22:58 crc kubenswrapper[4915]: I1124 21:22:58.691941 4915 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-multus/network-metrics-daemon-hkc4w" podStartSLOduration=176.691917186 podStartE2EDuration="2m56.691917186s" podCreationTimestamp="2025-11-24 21:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:22:58.688832421 +0000 UTC m=+197.005084604" watchObservedRunningTime="2025-11-24 21:22:58.691917186 +0000 UTC m=+197.008169369" Nov 24 21:23:08 crc kubenswrapper[4915]: I1124 21:23:08.751566 4915 generic.go:334] "Generic (PLEG): container finished" podID="db16378b-3c05-4a15-82f2-eb3d06649681" containerID="6aecb21ca8621ea797596ce306628e7236eb358b82d8281c429801ebe64cc210" exitCode=0 Nov 24 21:23:08 crc kubenswrapper[4915]: I1124 21:23:08.751662 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gsdw" event={"ID":"db16378b-3c05-4a15-82f2-eb3d06649681","Type":"ContainerDied","Data":"6aecb21ca8621ea797596ce306628e7236eb358b82d8281c429801ebe64cc210"} Nov 24 21:23:09 crc kubenswrapper[4915]: I1124 21:23:09.766853 4915 generic.go:334] "Generic (PLEG): container finished" podID="206d8bda-d169-42af-bbaa-ac3ddbef52a2" containerID="a692e785bd91523b83f911d3e07a159b91fe2af546a2821df23771dbcfa717e6" exitCode=0 Nov 24 21:23:09 crc kubenswrapper[4915]: I1124 21:23:09.766919 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btbhr" event={"ID":"206d8bda-d169-42af-bbaa-ac3ddbef52a2","Type":"ContainerDied","Data":"a692e785bd91523b83f911d3e07a159b91fe2af546a2821df23771dbcfa717e6"} Nov 24 21:23:09 crc kubenswrapper[4915]: I1124 21:23:09.779321 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gsdw" event={"ID":"db16378b-3c05-4a15-82f2-eb3d06649681","Type":"ContainerStarted","Data":"1478faccc4798a5f6a5547be92841d704f998a0cc9beac9fdce02811e37da96d"} Nov 24 21:23:09 crc kubenswrapper[4915]: I1124 
21:23:09.822027 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5gsdw" podStartSLOduration=3.709063044 podStartE2EDuration="59.822005539s" podCreationTimestamp="2025-11-24 21:22:10 +0000 UTC" firstStartedPulling="2025-11-24 21:22:13.259685794 +0000 UTC m=+151.575937967" lastFinishedPulling="2025-11-24 21:23:09.372628289 +0000 UTC m=+207.688880462" observedRunningTime="2025-11-24 21:23:09.820709819 +0000 UTC m=+208.136962012" watchObservedRunningTime="2025-11-24 21:23:09.822005539 +0000 UTC m=+208.138257712" Nov 24 21:23:11 crc kubenswrapper[4915]: I1124 21:23:11.240163 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:23:11 crc kubenswrapper[4915]: I1124 21:23:11.241128 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:23:11 crc kubenswrapper[4915]: I1124 21:23:11.384037 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:23:11 crc kubenswrapper[4915]: I1124 21:23:11.790999 4915 generic.go:334] "Generic (PLEG): container finished" podID="04ecdca8-2d28-4a23-9c7d-107d0a882bc9" containerID="f9518f07aa72dd5e935edbb660bca3daade6449bdd826135a51be1d3ecd6642e" exitCode=0 Nov 24 21:23:11 crc kubenswrapper[4915]: I1124 21:23:11.791024 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwqx9" event={"ID":"04ecdca8-2d28-4a23-9c7d-107d0a882bc9","Type":"ContainerDied","Data":"f9518f07aa72dd5e935edbb660bca3daade6449bdd826135a51be1d3ecd6642e"} Nov 24 21:23:11 crc kubenswrapper[4915]: I1124 21:23:11.793484 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btbhr" 
event={"ID":"206d8bda-d169-42af-bbaa-ac3ddbef52a2","Type":"ContainerStarted","Data":"9eb9217b6cb01571b93473f1246da1f3c1de3a425a8fc47da53fa3e3617c234a"} Nov 24 21:23:11 crc kubenswrapper[4915]: I1124 21:23:11.827893 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-btbhr" podStartSLOduration=3.126011951 podStartE2EDuration="1m0.827854895s" podCreationTimestamp="2025-11-24 21:22:11 +0000 UTC" firstStartedPulling="2025-11-24 21:22:13.227765058 +0000 UTC m=+151.544017231" lastFinishedPulling="2025-11-24 21:23:10.929608002 +0000 UTC m=+209.245860175" observedRunningTime="2025-11-24 21:23:11.824288164 +0000 UTC m=+210.140540337" watchObservedRunningTime="2025-11-24 21:23:11.827854895 +0000 UTC m=+210.144107078" Nov 24 21:23:12 crc kubenswrapper[4915]: I1124 21:23:12.204798 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rdk5z"] Nov 24 21:23:12 crc kubenswrapper[4915]: I1124 21:23:12.801379 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khkz6" event={"ID":"a4499c65-6038-409a-964e-4b00d5286518","Type":"ContainerDied","Data":"bc6647fdbbd07473aa1375d49fd255d9977f5811eb769162e82d982b469dc8dc"} Nov 24 21:23:12 crc kubenswrapper[4915]: I1124 21:23:12.801328 4915 generic.go:334] "Generic (PLEG): container finished" podID="a4499c65-6038-409a-964e-4b00d5286518" containerID="bc6647fdbbd07473aa1375d49fd255d9977f5811eb769162e82d982b469dc8dc" exitCode=0 Nov 24 21:23:12 crc kubenswrapper[4915]: I1124 21:23:12.804963 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4lth2" event={"ID":"68b56c92-7ee3-4d25-8161-1331558cc024","Type":"ContainerStarted","Data":"8b14bdefdae96e706f617e9f334c389224e60336048e98b6018ef110787c2e6e"} Nov 24 21:23:12 crc kubenswrapper[4915]: I1124 21:23:12.807029 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-br8zr" event={"ID":"dd92d417-da27-4446-aade-0a17abc2eecf","Type":"ContainerStarted","Data":"cf7a08c1d4cd0788738dfd5e9f59e567d622912523c4edf22d5e3311774b0968"} Nov 24 21:23:12 crc kubenswrapper[4915]: I1124 21:23:12.810443 4915 generic.go:334] "Generic (PLEG): container finished" podID="3dc0e018-8dcb-4b41-869e-18ed5010c682" containerID="be8c6d077734ad9c1ffbc45dd9207198767bbf1f2c3a8c6dcc4f3a9c19703b42" exitCode=0 Nov 24 21:23:12 crc kubenswrapper[4915]: I1124 21:23:12.810530 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b99gc" event={"ID":"3dc0e018-8dcb-4b41-869e-18ed5010c682","Type":"ContainerDied","Data":"be8c6d077734ad9c1ffbc45dd9207198767bbf1f2c3a8c6dcc4f3a9c19703b42"} Nov 24 21:23:12 crc kubenswrapper[4915]: I1124 21:23:12.813492 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwqx9" event={"ID":"04ecdca8-2d28-4a23-9c7d-107d0a882bc9","Type":"ContainerStarted","Data":"560b8398d00c30a3989c565fe6c3711f9dfd0ad7555ec122c7e3ee35679151d6"} Nov 24 21:23:12 crc kubenswrapper[4915]: I1124 21:23:12.863712 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rwqx9" podStartSLOduration=2.574928571 podStartE2EDuration="1m2.863685466s" podCreationTimestamp="2025-11-24 21:22:10 +0000 UTC" firstStartedPulling="2025-11-24 21:22:12.195081335 +0000 UTC m=+150.511333508" lastFinishedPulling="2025-11-24 21:23:12.48383823 +0000 UTC m=+210.800090403" observedRunningTime="2025-11-24 21:23:12.862028146 +0000 UTC m=+211.178280329" watchObservedRunningTime="2025-11-24 21:23:12.863685466 +0000 UTC m=+211.179937639" Nov 24 21:23:13 crc kubenswrapper[4915]: I1124 21:23:13.822654 4915 generic.go:334] "Generic (PLEG): container finished" podID="dd92d417-da27-4446-aade-0a17abc2eecf" containerID="cf7a08c1d4cd0788738dfd5e9f59e567d622912523c4edf22d5e3311774b0968" exitCode=0 Nov 24 
21:23:13 crc kubenswrapper[4915]: I1124 21:23:13.822730 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-br8zr" event={"ID":"dd92d417-da27-4446-aade-0a17abc2eecf","Type":"ContainerDied","Data":"cf7a08c1d4cd0788738dfd5e9f59e567d622912523c4edf22d5e3311774b0968"} Nov 24 21:23:13 crc kubenswrapper[4915]: I1124 21:23:13.826123 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b99gc" event={"ID":"3dc0e018-8dcb-4b41-869e-18ed5010c682","Type":"ContainerStarted","Data":"5ca7221f1c1603ed9d558f68ed9f2a39fb923e92261126e1c08f808b4983e07d"} Nov 24 21:23:13 crc kubenswrapper[4915]: I1124 21:23:13.828221 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khkz6" event={"ID":"a4499c65-6038-409a-964e-4b00d5286518","Type":"ContainerStarted","Data":"82c195af7d0e980f6af098fb8348a12a5cec23d3fbcbf254c9709a0312bb87cf"} Nov 24 21:23:13 crc kubenswrapper[4915]: I1124 21:23:13.830684 4915 generic.go:334] "Generic (PLEG): container finished" podID="68b56c92-7ee3-4d25-8161-1331558cc024" containerID="8b14bdefdae96e706f617e9f334c389224e60336048e98b6018ef110787c2e6e" exitCode=0 Nov 24 21:23:13 crc kubenswrapper[4915]: I1124 21:23:13.830719 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4lth2" event={"ID":"68b56c92-7ee3-4d25-8161-1331558cc024","Type":"ContainerDied","Data":"8b14bdefdae96e706f617e9f334c389224e60336048e98b6018ef110787c2e6e"} Nov 24 21:23:13 crc kubenswrapper[4915]: I1124 21:23:13.879951 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-khkz6" podStartSLOduration=2.712546339 podStartE2EDuration="1m1.879933952s" podCreationTimestamp="2025-11-24 21:22:12 +0000 UTC" firstStartedPulling="2025-11-24 21:22:14.286717519 +0000 UTC m=+152.602969692" lastFinishedPulling="2025-11-24 21:23:13.454104922 +0000 UTC m=+211.770357305" 
observedRunningTime="2025-11-24 21:23:13.878500318 +0000 UTC m=+212.194752491" watchObservedRunningTime="2025-11-24 21:23:13.879933952 +0000 UTC m=+212.196186125" Nov 24 21:23:13 crc kubenswrapper[4915]: I1124 21:23:13.899164 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b99gc" podStartSLOduration=2.739605692 podStartE2EDuration="1m0.899143157s" podCreationTimestamp="2025-11-24 21:22:13 +0000 UTC" firstStartedPulling="2025-11-24 21:22:15.330261942 +0000 UTC m=+153.646514115" lastFinishedPulling="2025-11-24 21:23:13.489799417 +0000 UTC m=+211.806051580" observedRunningTime="2025-11-24 21:23:13.896263718 +0000 UTC m=+212.212515901" watchObservedRunningTime="2025-11-24 21:23:13.899143157 +0000 UTC m=+212.215395340" Nov 24 21:23:14 crc kubenswrapper[4915]: I1124 21:23:14.840719 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-br8zr" event={"ID":"dd92d417-da27-4446-aade-0a17abc2eecf","Type":"ContainerStarted","Data":"2a36c36042c61b3950094923804384177640ab48da42471912298fb840416310"} Nov 24 21:23:14 crc kubenswrapper[4915]: I1124 21:23:14.843219 4915 generic.go:334] "Generic (PLEG): container finished" podID="200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" containerID="89d4e35e7a4bd517ac92a18fdf4106e929a2ab204bcd3897c26a50cb1cf0b24e" exitCode=0 Nov 24 21:23:14 crc kubenswrapper[4915]: I1124 21:23:14.843303 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prtlc" event={"ID":"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd","Type":"ContainerDied","Data":"89d4e35e7a4bd517ac92a18fdf4106e929a2ab204bcd3897c26a50cb1cf0b24e"} Nov 24 21:23:14 crc kubenswrapper[4915]: I1124 21:23:14.846206 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4lth2" 
event={"ID":"68b56c92-7ee3-4d25-8161-1331558cc024","Type":"ContainerStarted","Data":"938876c1c1a5372a2f5cc09015b1548a0fe3c0d98ca8a1f3664fb8b79c2a7fd9"} Nov 24 21:23:14 crc kubenswrapper[4915]: I1124 21:23:14.870808 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-br8zr" podStartSLOduration=2.837708669 podStartE2EDuration="1m1.870788572s" podCreationTimestamp="2025-11-24 21:22:13 +0000 UTC" firstStartedPulling="2025-11-24 21:22:15.323400469 +0000 UTC m=+153.639652642" lastFinishedPulling="2025-11-24 21:23:14.356480372 +0000 UTC m=+212.672732545" observedRunningTime="2025-11-24 21:23:14.867524431 +0000 UTC m=+213.183776604" watchObservedRunningTime="2025-11-24 21:23:14.870788572 +0000 UTC m=+213.187040745" Nov 24 21:23:14 crc kubenswrapper[4915]: I1124 21:23:14.928905 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4lth2" podStartSLOduration=2.007745082 podStartE2EDuration="1m0.92888711s" podCreationTimestamp="2025-11-24 21:22:14 +0000 UTC" firstStartedPulling="2025-11-24 21:22:15.303246356 +0000 UTC m=+153.619498529" lastFinishedPulling="2025-11-24 21:23:14.224388384 +0000 UTC m=+212.540640557" observedRunningTime="2025-11-24 21:23:14.927509637 +0000 UTC m=+213.243761840" watchObservedRunningTime="2025-11-24 21:23:14.92888711 +0000 UTC m=+213.245139293" Nov 24 21:23:19 crc kubenswrapper[4915]: I1124 21:23:19.879658 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prtlc" event={"ID":"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd","Type":"ContainerStarted","Data":"dda6270f5d47452c44dc53908e74c3202a6b3ae97cc9cbf0f8af9badd1542a9f"} Nov 24 21:23:19 crc kubenswrapper[4915]: I1124 21:23:19.902695 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-prtlc" podStartSLOduration=4.4183164569999995 podStartE2EDuration="1m9.902677481s" 
podCreationTimestamp="2025-11-24 21:22:10 +0000 UTC" firstStartedPulling="2025-11-24 21:22:12.198263572 +0000 UTC m=+150.514515745" lastFinishedPulling="2025-11-24 21:23:17.682624586 +0000 UTC m=+215.998876769" observedRunningTime="2025-11-24 21:23:19.900496704 +0000 UTC m=+218.216748877" watchObservedRunningTime="2025-11-24 21:23:19.902677481 +0000 UTC m=+218.218929654" Nov 24 21:23:20 crc kubenswrapper[4915]: I1124 21:23:20.811265 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:23:20 crc kubenswrapper[4915]: I1124 21:23:20.811375 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:23:20 crc kubenswrapper[4915]: I1124 21:23:20.876533 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:23:21 crc kubenswrapper[4915]: I1124 21:23:21.070981 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:23:21 crc kubenswrapper[4915]: I1124 21:23:21.071726 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:23:21 crc kubenswrapper[4915]: I1124 21:23:21.130810 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:23:21 crc kubenswrapper[4915]: I1124 21:23:21.330061 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:23:21 crc kubenswrapper[4915]: I1124 21:23:21.397302 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:23:21 crc kubenswrapper[4915]: I1124 21:23:21.397351 4915 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:23:21 crc kubenswrapper[4915]: I1124 21:23:21.435468 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:23:21 crc kubenswrapper[4915]: I1124 21:23:21.960950 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:23:21 crc kubenswrapper[4915]: I1124 21:23:21.975477 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:23:22 crc kubenswrapper[4915]: I1124 21:23:22.061681 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5gsdw"] Nov 24 21:23:22 crc kubenswrapper[4915]: I1124 21:23:22.062071 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5gsdw" podUID="db16378b-3c05-4a15-82f2-eb3d06649681" containerName="registry-server" containerID="cri-o://1478faccc4798a5f6a5547be92841d704f998a0cc9beac9fdce02811e37da96d" gracePeriod=2 Nov 24 21:23:22 crc kubenswrapper[4915]: I1124 21:23:22.987681 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:23:22 crc kubenswrapper[4915]: I1124 21:23:22.987811 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:23:23 crc kubenswrapper[4915]: I1124 21:23:23.040142 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:23:23 crc kubenswrapper[4915]: I1124 21:23:23.379505 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:23:23 crc 
kubenswrapper[4915]: I1124 21:23:23.379570 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:23:23 crc kubenswrapper[4915]: I1124 21:23:23.443180 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:23:23 crc kubenswrapper[4915]: I1124 21:23:23.458117 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-btbhr"] Nov 24 21:23:23 crc kubenswrapper[4915]: I1124 21:23:23.908363 4915 generic.go:334] "Generic (PLEG): container finished" podID="db16378b-3c05-4a15-82f2-eb3d06649681" containerID="1478faccc4798a5f6a5547be92841d704f998a0cc9beac9fdce02811e37da96d" exitCode=0 Nov 24 21:23:23 crc kubenswrapper[4915]: I1124 21:23:23.909072 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gsdw" event={"ID":"db16378b-3c05-4a15-82f2-eb3d06649681","Type":"ContainerDied","Data":"1478faccc4798a5f6a5547be92841d704f998a0cc9beac9fdce02811e37da96d"} Nov 24 21:23:23 crc kubenswrapper[4915]: I1124 21:23:23.909587 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-btbhr" podUID="206d8bda-d169-42af-bbaa-ac3ddbef52a2" containerName="registry-server" containerID="cri-o://9eb9217b6cb01571b93473f1246da1f3c1de3a425a8fc47da53fa3e3617c234a" gracePeriod=2 Nov 24 21:23:23 crc kubenswrapper[4915]: I1124 21:23:23.962602 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:23:23 crc kubenswrapper[4915]: I1124 21:23:23.982759 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.051237 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.051301 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.106250 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.205130 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.257955 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppj7k\" (UniqueName: \"kubernetes.io/projected/db16378b-3c05-4a15-82f2-eb3d06649681-kube-api-access-ppj7k\") pod \"db16378b-3c05-4a15-82f2-eb3d06649681\" (UID: \"db16378b-3c05-4a15-82f2-eb3d06649681\") " Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.258024 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db16378b-3c05-4a15-82f2-eb3d06649681-catalog-content\") pod \"db16378b-3c05-4a15-82f2-eb3d06649681\" (UID: \"db16378b-3c05-4a15-82f2-eb3d06649681\") " Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.258042 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db16378b-3c05-4a15-82f2-eb3d06649681-utilities\") pod \"db16378b-3c05-4a15-82f2-eb3d06649681\" (UID: \"db16378b-3c05-4a15-82f2-eb3d06649681\") " Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.259547 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db16378b-3c05-4a15-82f2-eb3d06649681-utilities" (OuterVolumeSpecName: "utilities") pod "db16378b-3c05-4a15-82f2-eb3d06649681" (UID: 
"db16378b-3c05-4a15-82f2-eb3d06649681"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.265956 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db16378b-3c05-4a15-82f2-eb3d06649681-kube-api-access-ppj7k" (OuterVolumeSpecName: "kube-api-access-ppj7k") pod "db16378b-3c05-4a15-82f2-eb3d06649681" (UID: "db16378b-3c05-4a15-82f2-eb3d06649681"). InnerVolumeSpecName "kube-api-access-ppj7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.270511 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.308390 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db16378b-3c05-4a15-82f2-eb3d06649681-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db16378b-3c05-4a15-82f2-eb3d06649681" (UID: "db16378b-3c05-4a15-82f2-eb3d06649681"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.348447 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.348497 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.348537 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.349092 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.349189 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336" gracePeriod=600 Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.359590 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206d8bda-d169-42af-bbaa-ac3ddbef52a2-catalog-content\") pod \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\" (UID: \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\") " Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.359693 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206d8bda-d169-42af-bbaa-ac3ddbef52a2-utilities\") pod \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\" (UID: \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\") " Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.359807 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfqkf\" (UniqueName: \"kubernetes.io/projected/206d8bda-d169-42af-bbaa-ac3ddbef52a2-kube-api-access-kfqkf\") pod \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\" (UID: \"206d8bda-d169-42af-bbaa-ac3ddbef52a2\") " Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.360276 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppj7k\" (UniqueName: \"kubernetes.io/projected/db16378b-3c05-4a15-82f2-eb3d06649681-kube-api-access-ppj7k\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.360310 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db16378b-3c05-4a15-82f2-eb3d06649681-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.360326 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db16378b-3c05-4a15-82f2-eb3d06649681-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.367422 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/206d8bda-d169-42af-bbaa-ac3ddbef52a2-utilities" (OuterVolumeSpecName: "utilities") pod 
"206d8bda-d169-42af-bbaa-ac3ddbef52a2" (UID: "206d8bda-d169-42af-bbaa-ac3ddbef52a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.368191 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206d8bda-d169-42af-bbaa-ac3ddbef52a2-kube-api-access-kfqkf" (OuterVolumeSpecName: "kube-api-access-kfqkf") pod "206d8bda-d169-42af-bbaa-ac3ddbef52a2" (UID: "206d8bda-d169-42af-bbaa-ac3ddbef52a2"). InnerVolumeSpecName "kube-api-access-kfqkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.433308 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/206d8bda-d169-42af-bbaa-ac3ddbef52a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "206d8bda-d169-42af-bbaa-ac3ddbef52a2" (UID: "206d8bda-d169-42af-bbaa-ac3ddbef52a2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.461410 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfqkf\" (UniqueName: \"kubernetes.io/projected/206d8bda-d169-42af-bbaa-ac3ddbef52a2-kube-api-access-kfqkf\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.461470 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206d8bda-d169-42af-bbaa-ac3ddbef52a2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.461482 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206d8bda-d169-42af-bbaa-ac3ddbef52a2-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:24 crc kubenswrapper[4915]: E1124 21:23:24.470998 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a95ccb9_af8d_493c_b3c5_4fcb2e28b992.slice/crio-34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a95ccb9_af8d_493c_b3c5_4fcb2e28b992.slice/crio-conmon-34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.477199 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.477249 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.537408 4915 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.917162 4915 generic.go:334] "Generic (PLEG): container finished" podID="206d8bda-d169-42af-bbaa-ac3ddbef52a2" containerID="9eb9217b6cb01571b93473f1246da1f3c1de3a425a8fc47da53fa3e3617c234a" exitCode=0 Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.917317 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btbhr" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.917317 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btbhr" event={"ID":"206d8bda-d169-42af-bbaa-ac3ddbef52a2","Type":"ContainerDied","Data":"9eb9217b6cb01571b93473f1246da1f3c1de3a425a8fc47da53fa3e3617c234a"} Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.917868 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btbhr" event={"ID":"206d8bda-d169-42af-bbaa-ac3ddbef52a2","Type":"ContainerDied","Data":"c472332d0250f91a57be7f373713e2a0db646fc77962f0e6d01bda5fc164e30c"} Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.917917 4915 scope.go:117] "RemoveContainer" containerID="9eb9217b6cb01571b93473f1246da1f3c1de3a425a8fc47da53fa3e3617c234a" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.921593 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5gsdw" event={"ID":"db16378b-3c05-4a15-82f2-eb3d06649681","Type":"ContainerDied","Data":"8403b1ac37d9b374e3be7e1e892767e9c83d8c93e28b9f926fc72a8160d8d9e5"} Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.921747 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5gsdw" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.928751 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336" exitCode=0 Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.928836 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336"} Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.928891 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"7a927e1b01e5942836750a7160599076a5dc63cd1a6d7a3fc0c7b1b258e1c919"} Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.947025 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5gsdw"] Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.953093 4915 scope.go:117] "RemoveContainer" containerID="a692e785bd91523b83f911d3e07a159b91fe2af546a2821df23771dbcfa717e6" Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.956062 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5gsdw"] Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.973854 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-btbhr"] Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.978716 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-btbhr"] Nov 24 21:23:24 crc kubenswrapper[4915]: I1124 21:23:24.984387 4915 scope.go:117] "RemoveContainer" 
containerID="12fe7519e08cb46df9e88abf0bee7c1d53cd15f3af605fb00baf3091af3c6164" Nov 24 21:23:25 crc kubenswrapper[4915]: I1124 21:23:25.007213 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:23:25 crc kubenswrapper[4915]: I1124 21:23:25.009810 4915 scope.go:117] "RemoveContainer" containerID="9eb9217b6cb01571b93473f1246da1f3c1de3a425a8fc47da53fa3e3617c234a" Nov 24 21:23:25 crc kubenswrapper[4915]: E1124 21:23:25.010328 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eb9217b6cb01571b93473f1246da1f3c1de3a425a8fc47da53fa3e3617c234a\": container with ID starting with 9eb9217b6cb01571b93473f1246da1f3c1de3a425a8fc47da53fa3e3617c234a not found: ID does not exist" containerID="9eb9217b6cb01571b93473f1246da1f3c1de3a425a8fc47da53fa3e3617c234a" Nov 24 21:23:25 crc kubenswrapper[4915]: I1124 21:23:25.010365 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eb9217b6cb01571b93473f1246da1f3c1de3a425a8fc47da53fa3e3617c234a"} err="failed to get container status \"9eb9217b6cb01571b93473f1246da1f3c1de3a425a8fc47da53fa3e3617c234a\": rpc error: code = NotFound desc = could not find container \"9eb9217b6cb01571b93473f1246da1f3c1de3a425a8fc47da53fa3e3617c234a\": container with ID starting with 9eb9217b6cb01571b93473f1246da1f3c1de3a425a8fc47da53fa3e3617c234a not found: ID does not exist" Nov 24 21:23:25 crc kubenswrapper[4915]: I1124 21:23:25.010393 4915 scope.go:117] "RemoveContainer" containerID="a692e785bd91523b83f911d3e07a159b91fe2af546a2821df23771dbcfa717e6" Nov 24 21:23:25 crc kubenswrapper[4915]: E1124 21:23:25.010979 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a692e785bd91523b83f911d3e07a159b91fe2af546a2821df23771dbcfa717e6\": container with ID starting with 
a692e785bd91523b83f911d3e07a159b91fe2af546a2821df23771dbcfa717e6 not found: ID does not exist" containerID="a692e785bd91523b83f911d3e07a159b91fe2af546a2821df23771dbcfa717e6" Nov 24 21:23:25 crc kubenswrapper[4915]: I1124 21:23:25.011051 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a692e785bd91523b83f911d3e07a159b91fe2af546a2821df23771dbcfa717e6"} err="failed to get container status \"a692e785bd91523b83f911d3e07a159b91fe2af546a2821df23771dbcfa717e6\": rpc error: code = NotFound desc = could not find container \"a692e785bd91523b83f911d3e07a159b91fe2af546a2821df23771dbcfa717e6\": container with ID starting with a692e785bd91523b83f911d3e07a159b91fe2af546a2821df23771dbcfa717e6 not found: ID does not exist" Nov 24 21:23:25 crc kubenswrapper[4915]: I1124 21:23:25.011093 4915 scope.go:117] "RemoveContainer" containerID="12fe7519e08cb46df9e88abf0bee7c1d53cd15f3af605fb00baf3091af3c6164" Nov 24 21:23:25 crc kubenswrapper[4915]: E1124 21:23:25.011410 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12fe7519e08cb46df9e88abf0bee7c1d53cd15f3af605fb00baf3091af3c6164\": container with ID starting with 12fe7519e08cb46df9e88abf0bee7c1d53cd15f3af605fb00baf3091af3c6164 not found: ID does not exist" containerID="12fe7519e08cb46df9e88abf0bee7c1d53cd15f3af605fb00baf3091af3c6164" Nov 24 21:23:25 crc kubenswrapper[4915]: I1124 21:23:25.011446 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12fe7519e08cb46df9e88abf0bee7c1d53cd15f3af605fb00baf3091af3c6164"} err="failed to get container status \"12fe7519e08cb46df9e88abf0bee7c1d53cd15f3af605fb00baf3091af3c6164\": rpc error: code = NotFound desc = could not find container \"12fe7519e08cb46df9e88abf0bee7c1d53cd15f3af605fb00baf3091af3c6164\": container with ID starting with 12fe7519e08cb46df9e88abf0bee7c1d53cd15f3af605fb00baf3091af3c6164 not found: ID does not 
exist" Nov 24 21:23:25 crc kubenswrapper[4915]: I1124 21:23:25.011468 4915 scope.go:117] "RemoveContainer" containerID="1478faccc4798a5f6a5547be92841d704f998a0cc9beac9fdce02811e37da96d" Nov 24 21:23:25 crc kubenswrapper[4915]: I1124 21:23:25.013522 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:23:25 crc kubenswrapper[4915]: I1124 21:23:25.053937 4915 scope.go:117] "RemoveContainer" containerID="6aecb21ca8621ea797596ce306628e7236eb358b82d8281c429801ebe64cc210" Nov 24 21:23:25 crc kubenswrapper[4915]: I1124 21:23:25.084261 4915 scope.go:117] "RemoveContainer" containerID="28ab6e3e3f9500112cc2420d694545c3efe91ba66717dc125ae3c3d87694b743" Nov 24 21:23:25 crc kubenswrapper[4915]: I1124 21:23:25.854592 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b99gc"] Nov 24 21:23:25 crc kubenswrapper[4915]: I1124 21:23:25.937877 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b99gc" podUID="3dc0e018-8dcb-4b41-869e-18ed5010c682" containerName="registry-server" containerID="cri-o://5ca7221f1c1603ed9d558f68ed9f2a39fb923e92261126e1c08f808b4983e07d" gracePeriod=2 Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.421186 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.442718 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="206d8bda-d169-42af-bbaa-ac3ddbef52a2" path="/var/lib/kubelet/pods/206d8bda-d169-42af-bbaa-ac3ddbef52a2/volumes" Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.443953 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db16378b-3c05-4a15-82f2-eb3d06649681" path="/var/lib/kubelet/pods/db16378b-3c05-4a15-82f2-eb3d06649681/volumes" Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.596887 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dc0e018-8dcb-4b41-869e-18ed5010c682-utilities\") pod \"3dc0e018-8dcb-4b41-869e-18ed5010c682\" (UID: \"3dc0e018-8dcb-4b41-869e-18ed5010c682\") " Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.596979 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gcr7\" (UniqueName: \"kubernetes.io/projected/3dc0e018-8dcb-4b41-869e-18ed5010c682-kube-api-access-6gcr7\") pod \"3dc0e018-8dcb-4b41-869e-18ed5010c682\" (UID: \"3dc0e018-8dcb-4b41-869e-18ed5010c682\") " Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.597030 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc0e018-8dcb-4b41-869e-18ed5010c682-catalog-content\") pod \"3dc0e018-8dcb-4b41-869e-18ed5010c682\" (UID: \"3dc0e018-8dcb-4b41-869e-18ed5010c682\") " Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.599204 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dc0e018-8dcb-4b41-869e-18ed5010c682-utilities" (OuterVolumeSpecName: "utilities") pod "3dc0e018-8dcb-4b41-869e-18ed5010c682" (UID: "3dc0e018-8dcb-4b41-869e-18ed5010c682"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.603522 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dc0e018-8dcb-4b41-869e-18ed5010c682-kube-api-access-6gcr7" (OuterVolumeSpecName: "kube-api-access-6gcr7") pod "3dc0e018-8dcb-4b41-869e-18ed5010c682" (UID: "3dc0e018-8dcb-4b41-869e-18ed5010c682"). InnerVolumeSpecName "kube-api-access-6gcr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.642678 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dc0e018-8dcb-4b41-869e-18ed5010c682-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3dc0e018-8dcb-4b41-869e-18ed5010c682" (UID: "3dc0e018-8dcb-4b41-869e-18ed5010c682"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.698259 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3dc0e018-8dcb-4b41-869e-18ed5010c682-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.698302 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gcr7\" (UniqueName: \"kubernetes.io/projected/3dc0e018-8dcb-4b41-869e-18ed5010c682-kube-api-access-6gcr7\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.698321 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3dc0e018-8dcb-4b41-869e-18ed5010c682-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.946769 4915 generic.go:334] "Generic (PLEG): container finished" podID="3dc0e018-8dcb-4b41-869e-18ed5010c682" 
containerID="5ca7221f1c1603ed9d558f68ed9f2a39fb923e92261126e1c08f808b4983e07d" exitCode=0 Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.946842 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b99gc" event={"ID":"3dc0e018-8dcb-4b41-869e-18ed5010c682","Type":"ContainerDied","Data":"5ca7221f1c1603ed9d558f68ed9f2a39fb923e92261126e1c08f808b4983e07d"} Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.946888 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b99gc" event={"ID":"3dc0e018-8dcb-4b41-869e-18ed5010c682","Type":"ContainerDied","Data":"5695a62aec404145b5273c805432bc0d587f6bf9763ba7576fd37b59867b1ab3"} Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.946913 4915 scope.go:117] "RemoveContainer" containerID="5ca7221f1c1603ed9d558f68ed9f2a39fb923e92261126e1c08f808b4983e07d" Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.946946 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b99gc" Nov 24 21:23:26 crc kubenswrapper[4915]: I1124 21:23:26.974981 4915 scope.go:117] "RemoveContainer" containerID="be8c6d077734ad9c1ffbc45dd9207198767bbf1f2c3a8c6dcc4f3a9c19703b42" Nov 24 21:23:27 crc kubenswrapper[4915]: I1124 21:23:27.001031 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b99gc"] Nov 24 21:23:27 crc kubenswrapper[4915]: I1124 21:23:27.005233 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b99gc"] Nov 24 21:23:27 crc kubenswrapper[4915]: I1124 21:23:27.015527 4915 scope.go:117] "RemoveContainer" containerID="2e9bc6a405c444acc7f1282ea19cd17b5976301498bcfdc34edfd06ca1877a28" Nov 24 21:23:27 crc kubenswrapper[4915]: I1124 21:23:27.038895 4915 scope.go:117] "RemoveContainer" containerID="5ca7221f1c1603ed9d558f68ed9f2a39fb923e92261126e1c08f808b4983e07d" Nov 24 21:23:27 crc kubenswrapper[4915]: E1124 21:23:27.039451 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca7221f1c1603ed9d558f68ed9f2a39fb923e92261126e1c08f808b4983e07d\": container with ID starting with 5ca7221f1c1603ed9d558f68ed9f2a39fb923e92261126e1c08f808b4983e07d not found: ID does not exist" containerID="5ca7221f1c1603ed9d558f68ed9f2a39fb923e92261126e1c08f808b4983e07d" Nov 24 21:23:27 crc kubenswrapper[4915]: I1124 21:23:27.039526 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca7221f1c1603ed9d558f68ed9f2a39fb923e92261126e1c08f808b4983e07d"} err="failed to get container status \"5ca7221f1c1603ed9d558f68ed9f2a39fb923e92261126e1c08f808b4983e07d\": rpc error: code = NotFound desc = could not find container \"5ca7221f1c1603ed9d558f68ed9f2a39fb923e92261126e1c08f808b4983e07d\": container with ID starting with 5ca7221f1c1603ed9d558f68ed9f2a39fb923e92261126e1c08f808b4983e07d not found: 
ID does not exist" Nov 24 21:23:27 crc kubenswrapper[4915]: I1124 21:23:27.039632 4915 scope.go:117] "RemoveContainer" containerID="be8c6d077734ad9c1ffbc45dd9207198767bbf1f2c3a8c6dcc4f3a9c19703b42" Nov 24 21:23:27 crc kubenswrapper[4915]: E1124 21:23:27.040354 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be8c6d077734ad9c1ffbc45dd9207198767bbf1f2c3a8c6dcc4f3a9c19703b42\": container with ID starting with be8c6d077734ad9c1ffbc45dd9207198767bbf1f2c3a8c6dcc4f3a9c19703b42 not found: ID does not exist" containerID="be8c6d077734ad9c1ffbc45dd9207198767bbf1f2c3a8c6dcc4f3a9c19703b42" Nov 24 21:23:27 crc kubenswrapper[4915]: I1124 21:23:27.040454 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be8c6d077734ad9c1ffbc45dd9207198767bbf1f2c3a8c6dcc4f3a9c19703b42"} err="failed to get container status \"be8c6d077734ad9c1ffbc45dd9207198767bbf1f2c3a8c6dcc4f3a9c19703b42\": rpc error: code = NotFound desc = could not find container \"be8c6d077734ad9c1ffbc45dd9207198767bbf1f2c3a8c6dcc4f3a9c19703b42\": container with ID starting with be8c6d077734ad9c1ffbc45dd9207198767bbf1f2c3a8c6dcc4f3a9c19703b42 not found: ID does not exist" Nov 24 21:23:27 crc kubenswrapper[4915]: I1124 21:23:27.040542 4915 scope.go:117] "RemoveContainer" containerID="2e9bc6a405c444acc7f1282ea19cd17b5976301498bcfdc34edfd06ca1877a28" Nov 24 21:23:27 crc kubenswrapper[4915]: E1124 21:23:27.041105 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e9bc6a405c444acc7f1282ea19cd17b5976301498bcfdc34edfd06ca1877a28\": container with ID starting with 2e9bc6a405c444acc7f1282ea19cd17b5976301498bcfdc34edfd06ca1877a28 not found: ID does not exist" containerID="2e9bc6a405c444acc7f1282ea19cd17b5976301498bcfdc34edfd06ca1877a28" Nov 24 21:23:27 crc kubenswrapper[4915]: I1124 21:23:27.041200 4915 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e9bc6a405c444acc7f1282ea19cd17b5976301498bcfdc34edfd06ca1877a28"} err="failed to get container status \"2e9bc6a405c444acc7f1282ea19cd17b5976301498bcfdc34edfd06ca1877a28\": rpc error: code = NotFound desc = could not find container \"2e9bc6a405c444acc7f1282ea19cd17b5976301498bcfdc34edfd06ca1877a28\": container with ID starting with 2e9bc6a405c444acc7f1282ea19cd17b5976301498bcfdc34edfd06ca1877a28 not found: ID does not exist" Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.260271 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4lth2"] Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.261938 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4lth2" podUID="68b56c92-7ee3-4d25-8161-1331558cc024" containerName="registry-server" containerID="cri-o://938876c1c1a5372a2f5cc09015b1548a0fe3c0d98ca8a1f3664fb8b79c2a7fd9" gracePeriod=2 Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.434677 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dc0e018-8dcb-4b41-869e-18ed5010c682" path="/var/lib/kubelet/pods/3dc0e018-8dcb-4b41-869e-18ed5010c682/volumes" Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.861677 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.968207 4915 generic.go:334] "Generic (PLEG): container finished" podID="68b56c92-7ee3-4d25-8161-1331558cc024" containerID="938876c1c1a5372a2f5cc09015b1548a0fe3c0d98ca8a1f3664fb8b79c2a7fd9" exitCode=0 Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.968279 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4lth2" event={"ID":"68b56c92-7ee3-4d25-8161-1331558cc024","Type":"ContainerDied","Data":"938876c1c1a5372a2f5cc09015b1548a0fe3c0d98ca8a1f3664fb8b79c2a7fd9"} Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.968340 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4lth2" Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.968369 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4lth2" event={"ID":"68b56c92-7ee3-4d25-8161-1331558cc024","Type":"ContainerDied","Data":"63aea461ee2596a117b0b9832232994833070e026984bc81bcc3c0bf24d0cc85"} Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.968422 4915 scope.go:117] "RemoveContainer" containerID="938876c1c1a5372a2f5cc09015b1548a0fe3c0d98ca8a1f3664fb8b79c2a7fd9" Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.987393 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtb6z\" (UniqueName: \"kubernetes.io/projected/68b56c92-7ee3-4d25-8161-1331558cc024-kube-api-access-rtb6z\") pod \"68b56c92-7ee3-4d25-8161-1331558cc024\" (UID: \"68b56c92-7ee3-4d25-8161-1331558cc024\") " Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.987446 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b56c92-7ee3-4d25-8161-1331558cc024-catalog-content\") pod 
\"68b56c92-7ee3-4d25-8161-1331558cc024\" (UID: \"68b56c92-7ee3-4d25-8161-1331558cc024\") " Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.987464 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b56c92-7ee3-4d25-8161-1331558cc024-utilities\") pod \"68b56c92-7ee3-4d25-8161-1331558cc024\" (UID: \"68b56c92-7ee3-4d25-8161-1331558cc024\") " Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.988578 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b56c92-7ee3-4d25-8161-1331558cc024-utilities" (OuterVolumeSpecName: "utilities") pod "68b56c92-7ee3-4d25-8161-1331558cc024" (UID: "68b56c92-7ee3-4d25-8161-1331558cc024"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.993220 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68b56c92-7ee3-4d25-8161-1331558cc024-kube-api-access-rtb6z" (OuterVolumeSpecName: "kube-api-access-rtb6z") pod "68b56c92-7ee3-4d25-8161-1331558cc024" (UID: "68b56c92-7ee3-4d25-8161-1331558cc024"). InnerVolumeSpecName "kube-api-access-rtb6z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:23:28 crc kubenswrapper[4915]: I1124 21:23:28.998527 4915 scope.go:117] "RemoveContainer" containerID="8b14bdefdae96e706f617e9f334c389224e60336048e98b6018ef110787c2e6e" Nov 24 21:23:29 crc kubenswrapper[4915]: I1124 21:23:29.031971 4915 scope.go:117] "RemoveContainer" containerID="77343e3dd0dbf9a47d1cfaf1ed5f7e435f9f5d41188162ad0d50c0a46281a7ae" Nov 24 21:23:29 crc kubenswrapper[4915]: I1124 21:23:29.050698 4915 scope.go:117] "RemoveContainer" containerID="938876c1c1a5372a2f5cc09015b1548a0fe3c0d98ca8a1f3664fb8b79c2a7fd9" Nov 24 21:23:29 crc kubenswrapper[4915]: E1124 21:23:29.051238 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"938876c1c1a5372a2f5cc09015b1548a0fe3c0d98ca8a1f3664fb8b79c2a7fd9\": container with ID starting with 938876c1c1a5372a2f5cc09015b1548a0fe3c0d98ca8a1f3664fb8b79c2a7fd9 not found: ID does not exist" containerID="938876c1c1a5372a2f5cc09015b1548a0fe3c0d98ca8a1f3664fb8b79c2a7fd9" Nov 24 21:23:29 crc kubenswrapper[4915]: I1124 21:23:29.051340 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"938876c1c1a5372a2f5cc09015b1548a0fe3c0d98ca8a1f3664fb8b79c2a7fd9"} err="failed to get container status \"938876c1c1a5372a2f5cc09015b1548a0fe3c0d98ca8a1f3664fb8b79c2a7fd9\": rpc error: code = NotFound desc = could not find container \"938876c1c1a5372a2f5cc09015b1548a0fe3c0d98ca8a1f3664fb8b79c2a7fd9\": container with ID starting with 938876c1c1a5372a2f5cc09015b1548a0fe3c0d98ca8a1f3664fb8b79c2a7fd9 not found: ID does not exist" Nov 24 21:23:29 crc kubenswrapper[4915]: I1124 21:23:29.051379 4915 scope.go:117] "RemoveContainer" containerID="8b14bdefdae96e706f617e9f334c389224e60336048e98b6018ef110787c2e6e" Nov 24 21:23:29 crc kubenswrapper[4915]: E1124 21:23:29.051796 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"8b14bdefdae96e706f617e9f334c389224e60336048e98b6018ef110787c2e6e\": container with ID starting with 8b14bdefdae96e706f617e9f334c389224e60336048e98b6018ef110787c2e6e not found: ID does not exist" containerID="8b14bdefdae96e706f617e9f334c389224e60336048e98b6018ef110787c2e6e" Nov 24 21:23:29 crc kubenswrapper[4915]: I1124 21:23:29.051847 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b14bdefdae96e706f617e9f334c389224e60336048e98b6018ef110787c2e6e"} err="failed to get container status \"8b14bdefdae96e706f617e9f334c389224e60336048e98b6018ef110787c2e6e\": rpc error: code = NotFound desc = could not find container \"8b14bdefdae96e706f617e9f334c389224e60336048e98b6018ef110787c2e6e\": container with ID starting with 8b14bdefdae96e706f617e9f334c389224e60336048e98b6018ef110787c2e6e not found: ID does not exist" Nov 24 21:23:29 crc kubenswrapper[4915]: I1124 21:23:29.051882 4915 scope.go:117] "RemoveContainer" containerID="77343e3dd0dbf9a47d1cfaf1ed5f7e435f9f5d41188162ad0d50c0a46281a7ae" Nov 24 21:23:29 crc kubenswrapper[4915]: E1124 21:23:29.052439 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77343e3dd0dbf9a47d1cfaf1ed5f7e435f9f5d41188162ad0d50c0a46281a7ae\": container with ID starting with 77343e3dd0dbf9a47d1cfaf1ed5f7e435f9f5d41188162ad0d50c0a46281a7ae not found: ID does not exist" containerID="77343e3dd0dbf9a47d1cfaf1ed5f7e435f9f5d41188162ad0d50c0a46281a7ae" Nov 24 21:23:29 crc kubenswrapper[4915]: I1124 21:23:29.052483 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77343e3dd0dbf9a47d1cfaf1ed5f7e435f9f5d41188162ad0d50c0a46281a7ae"} err="failed to get container status \"77343e3dd0dbf9a47d1cfaf1ed5f7e435f9f5d41188162ad0d50c0a46281a7ae\": rpc error: code = NotFound desc = could not find container \"77343e3dd0dbf9a47d1cfaf1ed5f7e435f9f5d41188162ad0d50c0a46281a7ae\": 
container with ID starting with 77343e3dd0dbf9a47d1cfaf1ed5f7e435f9f5d41188162ad0d50c0a46281a7ae not found: ID does not exist" Nov 24 21:23:29 crc kubenswrapper[4915]: I1124 21:23:29.082313 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68b56c92-7ee3-4d25-8161-1331558cc024-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68b56c92-7ee3-4d25-8161-1331558cc024" (UID: "68b56c92-7ee3-4d25-8161-1331558cc024"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:23:29 crc kubenswrapper[4915]: I1124 21:23:29.089357 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtb6z\" (UniqueName: \"kubernetes.io/projected/68b56c92-7ee3-4d25-8161-1331558cc024-kube-api-access-rtb6z\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:29 crc kubenswrapper[4915]: I1124 21:23:29.089399 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68b56c92-7ee3-4d25-8161-1331558cc024-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:29 crc kubenswrapper[4915]: I1124 21:23:29.089409 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68b56c92-7ee3-4d25-8161-1331558cc024-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:29 crc kubenswrapper[4915]: I1124 21:23:29.314050 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4lth2"] Nov 24 21:23:29 crc kubenswrapper[4915]: I1124 21:23:29.315528 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4lth2"] Nov 24 21:23:30 crc kubenswrapper[4915]: I1124 21:23:30.435790 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68b56c92-7ee3-4d25-8161-1331558cc024" path="/var/lib/kubelet/pods/68b56c92-7ee3-4d25-8161-1331558cc024/volumes" Nov 24 21:23:30 crc 
kubenswrapper[4915]: I1124 21:23:30.860025 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.233885 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" podUID="e0cfbbde-c3a3-4306-a714-76134e43b495" containerName="oauth-openshift" containerID="cri-o://a70b7116eb6b3d2d5388bd487b52c42e5f09dcd44a2171c56cd76d726bc179c0" gracePeriod=15 Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.658716 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.809440 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0cfbbde-c3a3-4306-a714-76134e43b495-audit-dir\") pod \"e0cfbbde-c3a3-4306-a714-76134e43b495\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.809552 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-service-ca\") pod \"e0cfbbde-c3a3-4306-a714-76134e43b495\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.809593 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-ocp-branding-template\") pod \"e0cfbbde-c3a3-4306-a714-76134e43b495\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.809631 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-session\") pod \"e0cfbbde-c3a3-4306-a714-76134e43b495\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.809646 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0cfbbde-c3a3-4306-a714-76134e43b495-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e0cfbbde-c3a3-4306-a714-76134e43b495" (UID: "e0cfbbde-c3a3-4306-a714-76134e43b495"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.809675 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vvgd\" (UniqueName: \"kubernetes.io/projected/e0cfbbde-c3a3-4306-a714-76134e43b495-kube-api-access-6vvgd\") pod \"e0cfbbde-c3a3-4306-a714-76134e43b495\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.809803 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-login\") pod \"e0cfbbde-c3a3-4306-a714-76134e43b495\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.809900 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-trusted-ca-bundle\") pod \"e0cfbbde-c3a3-4306-a714-76134e43b495\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.809982 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-serving-cert\") pod \"e0cfbbde-c3a3-4306-a714-76134e43b495\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.810044 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-error\") pod \"e0cfbbde-c3a3-4306-a714-76134e43b495\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.810112 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-idp-0-file-data\") pod \"e0cfbbde-c3a3-4306-a714-76134e43b495\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.810185 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-cliconfig\") pod \"e0cfbbde-c3a3-4306-a714-76134e43b495\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.810241 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-router-certs\") pod \"e0cfbbde-c3a3-4306-a714-76134e43b495\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.810305 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-audit-policies\") pod \"e0cfbbde-c3a3-4306-a714-76134e43b495\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.810367 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-provider-selection\") pod \"e0cfbbde-c3a3-4306-a714-76134e43b495\" (UID: \"e0cfbbde-c3a3-4306-a714-76134e43b495\") " Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.811064 4915 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e0cfbbde-c3a3-4306-a714-76134e43b495-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.811102 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "e0cfbbde-c3a3-4306-a714-76134e43b495" (UID: "e0cfbbde-c3a3-4306-a714-76134e43b495"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.811131 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "e0cfbbde-c3a3-4306-a714-76134e43b495" (UID: "e0cfbbde-c3a3-4306-a714-76134e43b495"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.811154 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "e0cfbbde-c3a3-4306-a714-76134e43b495" (UID: "e0cfbbde-c3a3-4306-a714-76134e43b495"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.812029 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "e0cfbbde-c3a3-4306-a714-76134e43b495" (UID: "e0cfbbde-c3a3-4306-a714-76134e43b495"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.816308 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "e0cfbbde-c3a3-4306-a714-76134e43b495" (UID: "e0cfbbde-c3a3-4306-a714-76134e43b495"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.816395 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0cfbbde-c3a3-4306-a714-76134e43b495-kube-api-access-6vvgd" (OuterVolumeSpecName: "kube-api-access-6vvgd") pod "e0cfbbde-c3a3-4306-a714-76134e43b495" (UID: "e0cfbbde-c3a3-4306-a714-76134e43b495"). InnerVolumeSpecName "kube-api-access-6vvgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.817834 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "e0cfbbde-c3a3-4306-a714-76134e43b495" (UID: "e0cfbbde-c3a3-4306-a714-76134e43b495"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.818277 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "e0cfbbde-c3a3-4306-a714-76134e43b495" (UID: "e0cfbbde-c3a3-4306-a714-76134e43b495"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.818555 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "e0cfbbde-c3a3-4306-a714-76134e43b495" (UID: "e0cfbbde-c3a3-4306-a714-76134e43b495"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.818659 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "e0cfbbde-c3a3-4306-a714-76134e43b495" (UID: "e0cfbbde-c3a3-4306-a714-76134e43b495"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.818888 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "e0cfbbde-c3a3-4306-a714-76134e43b495" (UID: "e0cfbbde-c3a3-4306-a714-76134e43b495"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.819288 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "e0cfbbde-c3a3-4306-a714-76134e43b495" (UID: "e0cfbbde-c3a3-4306-a714-76134e43b495"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.819524 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "e0cfbbde-c3a3-4306-a714-76134e43b495" (UID: "e0cfbbde-c3a3-4306-a714-76134e43b495"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.912340 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.912394 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.912416 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vvgd\" (UniqueName: \"kubernetes.io/projected/e0cfbbde-c3a3-4306-a714-76134e43b495-kube-api-access-6vvgd\") on node \"crc\" DevicePath \"\""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.912437 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.912459 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.912480 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.912499 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.912521 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.912540 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.912561 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.912580 4915 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-audit-policies\") on node \"crc\" DevicePath \"\""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.912600 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Nov 24 21:23:37 crc kubenswrapper[4915]: I1124 21:23:37.912620 4915 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0cfbbde-c3a3-4306-a714-76134e43b495-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.030769 4915 generic.go:334] "Generic (PLEG): container finished" podID="e0cfbbde-c3a3-4306-a714-76134e43b495" containerID="a70b7116eb6b3d2d5388bd487b52c42e5f09dcd44a2171c56cd76d726bc179c0" exitCode=0
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.030938 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.030960 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" event={"ID":"e0cfbbde-c3a3-4306-a714-76134e43b495","Type":"ContainerDied","Data":"a70b7116eb6b3d2d5388bd487b52c42e5f09dcd44a2171c56cd76d726bc179c0"}
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.032051 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-rdk5z" event={"ID":"e0cfbbde-c3a3-4306-a714-76134e43b495","Type":"ContainerDied","Data":"6e60012216f579f35f80bd59c6ea8fa962467af67bfe34d8ec5fb121836b9191"}
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.032097 4915 scope.go:117] "RemoveContainer" containerID="a70b7116eb6b3d2d5388bd487b52c42e5f09dcd44a2171c56cd76d726bc179c0"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.068728 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rdk5z"]
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.069143 4915 scope.go:117] "RemoveContainer" containerID="a70b7116eb6b3d2d5388bd487b52c42e5f09dcd44a2171c56cd76d726bc179c0"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.069917 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a70b7116eb6b3d2d5388bd487b52c42e5f09dcd44a2171c56cd76d726bc179c0\": container with ID starting with a70b7116eb6b3d2d5388bd487b52c42e5f09dcd44a2171c56cd76d726bc179c0 not found: ID does not exist" containerID="a70b7116eb6b3d2d5388bd487b52c42e5f09dcd44a2171c56cd76d726bc179c0"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.069996 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a70b7116eb6b3d2d5388bd487b52c42e5f09dcd44a2171c56cd76d726bc179c0"} err="failed to get container status \"a70b7116eb6b3d2d5388bd487b52c42e5f09dcd44a2171c56cd76d726bc179c0\": rpc error: code = NotFound desc = could not find container \"a70b7116eb6b3d2d5388bd487b52c42e5f09dcd44a2171c56cd76d726bc179c0\": container with ID starting with a70b7116eb6b3d2d5388bd487b52c42e5f09dcd44a2171c56cd76d726bc179c0 not found: ID does not exist"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.071125 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-rdk5z"]
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.458664 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0cfbbde-c3a3-4306-a714-76134e43b495" path="/var/lib/kubelet/pods/e0cfbbde-c3a3-4306-a714-76134e43b495/volumes"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.840248 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"]
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.840629 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8db4aa-bd83-40e5-8673-4f6db186c161" containerName="pruner"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.840650 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8db4aa-bd83-40e5-8673-4f6db186c161" containerName="pruner"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.840666 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0cfbbde-c3a3-4306-a714-76134e43b495" containerName="oauth-openshift"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.840679 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0cfbbde-c3a3-4306-a714-76134e43b495" containerName="oauth-openshift"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.840697 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b56c92-7ee3-4d25-8161-1331558cc024" containerName="extract-content"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.840710 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b56c92-7ee3-4d25-8161-1331558cc024" containerName="extract-content"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.840725 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db16378b-3c05-4a15-82f2-eb3d06649681" containerName="registry-server"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.840738 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="db16378b-3c05-4a15-82f2-eb3d06649681" containerName="registry-server"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.840758 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b56c92-7ee3-4d25-8161-1331558cc024" containerName="extract-utilities"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.840771 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b56c92-7ee3-4d25-8161-1331558cc024" containerName="extract-utilities"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.840829 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206d8bda-d169-42af-bbaa-ac3ddbef52a2" containerName="extract-utilities"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.840842 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="206d8bda-d169-42af-bbaa-ac3ddbef52a2" containerName="extract-utilities"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.840858 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206d8bda-d169-42af-bbaa-ac3ddbef52a2" containerName="registry-server"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.840871 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="206d8bda-d169-42af-bbaa-ac3ddbef52a2" containerName="registry-server"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.840892 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc0e018-8dcb-4b41-869e-18ed5010c682" containerName="extract-content"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.840904 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc0e018-8dcb-4b41-869e-18ed5010c682" containerName="extract-content"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.840923 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc0e018-8dcb-4b41-869e-18ed5010c682" containerName="extract-utilities"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.840935 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc0e018-8dcb-4b41-869e-18ed5010c682" containerName="extract-utilities"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.840951 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db16378b-3c05-4a15-82f2-eb3d06649681" containerName="extract-content"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.840963 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="db16378b-3c05-4a15-82f2-eb3d06649681" containerName="extract-content"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.840976 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68b56c92-7ee3-4d25-8161-1331558cc024" containerName="registry-server"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.840992 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="68b56c92-7ee3-4d25-8161-1331558cc024" containerName="registry-server"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.841009 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db16378b-3c05-4a15-82f2-eb3d06649681" containerName="extract-utilities"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.841021 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="db16378b-3c05-4a15-82f2-eb3d06649681" containerName="extract-utilities"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.841037 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dc0e018-8dcb-4b41-869e-18ed5010c682" containerName="registry-server"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.841050 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dc0e018-8dcb-4b41-869e-18ed5010c682" containerName="registry-server"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.841070 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="723c1a8c-4c1d-4851-ac8d-006f106f9fba" containerName="pruner"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.841084 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="723c1a8c-4c1d-4851-ac8d-006f106f9fba" containerName="pruner"
Nov 24 21:23:38 crc kubenswrapper[4915]: E1124 21:23:38.841105 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206d8bda-d169-42af-bbaa-ac3ddbef52a2" containerName="extract-content"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.841116 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="206d8bda-d169-42af-bbaa-ac3ddbef52a2" containerName="extract-content"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.841279 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="206d8bda-d169-42af-bbaa-ac3ddbef52a2" containerName="registry-server"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.841296 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="db16378b-3c05-4a15-82f2-eb3d06649681" containerName="registry-server"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.841315 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0cfbbde-c3a3-4306-a714-76134e43b495" containerName="oauth-openshift"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.841328 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa8db4aa-bd83-40e5-8673-4f6db186c161" containerName="pruner"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.841339 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="68b56c92-7ee3-4d25-8161-1331558cc024" containerName="registry-server"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.841360 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dc0e018-8dcb-4b41-869e-18ed5010c682" containerName="registry-server"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.841376 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="723c1a8c-4c1d-4851-ac8d-006f106f9fba" containerName="pruner"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.842005 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.845571 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.846069 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.846238 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.849453 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.849857 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.850217 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.850537 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.853888 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.854144 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.854204 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.854502 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.856078 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.866066 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.868885 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.877230 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"]
Nov 24 21:23:38 crc kubenswrapper[4915]: I1124 21:23:38.881502 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.026554 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.026654 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-user-template-error\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.026724 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-serving-cert\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.026771 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.026890 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.026947 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-service-ca\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.027116 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-router-certs\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.027197 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-session\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.027242 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1ac78f0-a240-4007-94e3-ae225f20fa57-audit-policies\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.027316 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-cliconfig\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.027393 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22259\" (UniqueName: \"kubernetes.io/projected/c1ac78f0-a240-4007-94e3-ae225f20fa57-kube-api-access-22259\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.027461 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-user-template-login\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.027517 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.027589 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1ac78f0-a240-4007-94e3-ae225f20fa57-audit-dir\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129434 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129503 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-user-template-error\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129540 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-serving-cert\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129568 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129601 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129631 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-service-ca\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129671 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-router-certs\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129695 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-session\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129718 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1ac78f0-a240-4007-94e3-ae225f20fa57-audit-policies\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129740 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-cliconfig\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129766 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22259\" (UniqueName: \"kubernetes.io/projected/c1ac78f0-a240-4007-94e3-ae225f20fa57-kube-api-access-22259\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129825 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-user-template-login\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129850 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129892 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1ac78f0-a240-4007-94e3-ae225f20fa57-audit-dir\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.129979 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c1ac78f0-a240-4007-94e3-ae225f20fa57-audit-dir\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.130719 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-service-ca\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.130839 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-cliconfig\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.131821 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.131860 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c1ac78f0-a240-4007-94e3-ae225f20fa57-audit-policies\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.134667 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-session\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.134713 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-router-certs\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.134900 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.135127 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.135560 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-user-template-error\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.135962 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.136154 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-user-template-login\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.137516 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c1ac78f0-a240-4007-94e3-ae225f20fa57-v4-0-config-system-serving-cert\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.161336 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22259\" (UniqueName: \"kubernetes.io/projected/c1ac78f0-a240-4007-94e3-ae225f20fa57-kube-api-access-22259\") pod \"oauth-openshift-679fb67f4b-jcrnd\" (UID: \"c1ac78f0-a240-4007-94e3-ae225f20fa57\") " pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.178713 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:39 crc kubenswrapper[4915]: I1124 21:23:39.379278 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"]
Nov 24 21:23:40 crc kubenswrapper[4915]: I1124 21:23:40.049829 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd" event={"ID":"c1ac78f0-a240-4007-94e3-ae225f20fa57","Type":"ContainerStarted","Data":"83f62526c4f61a9b07eca657f06b80ecb251d2f4dad903ec917548ba64407528"}
Nov 24 21:23:40 crc kubenswrapper[4915]: I1124 21:23:40.050856 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:23:40 crc kubenswrapper[4915]: I1124 21:23:40.050895 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd" event={"ID":"c1ac78f0-a240-4007-94e3-ae225f20fa57","Type":"ContainerStarted","Data":"ae3522f046a3fb3a13caf7c8ed6cf7f77a2cfb4a2cc6bb134209c3d6178806f4"}
Nov 24 21:23:40 crc kubenswrapper[4915]: I1124 21:23:40.082847 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd" podStartSLOduration=28.082759981 podStartE2EDuration="28.082759981s" podCreationTimestamp="2025-11-24 21:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:23:40.078296012 +0000 UTC m=+238.394548235" watchObservedRunningTime="2025-11-24 21:23:40.082759981 +0000 UTC m=+238.399012194"
Nov 24 21:23:40 crc kubenswrapper[4915]: I1124 21:23:40.205412 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-679fb67f4b-jcrnd"
Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.194003 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prtlc"]
Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.194858 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-prtlc" podUID="200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" containerName="registry-server" containerID="cri-o://dda6270f5d47452c44dc53908e74c3202a6b3ae97cc9cbf0f8af9badd1542a9f" gracePeriod=30
Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.206038 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rwqx9"]
Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.206302 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rwqx9" podUID="04ecdca8-2d28-4a23-9c7d-107d0a882bc9" containerName="registry-server"
containerID="cri-o://560b8398d00c30a3989c565fe6c3711f9dfd0ad7555ec122c7e3ee35679151d6" gracePeriod=30 Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.216948 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zbtbl"] Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.217211 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" podUID="38f0c117-4d15-4ac3-aece-8f0189d91bdb" containerName="marketplace-operator" containerID="cri-o://fcc6ba456b5973228b4b147ab3d4bbbbc4698a26856b640381f15b9f47caa322" gracePeriod=30 Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.234629 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-khkz6"] Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.234911 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-khkz6" podUID="a4499c65-6038-409a-964e-4b00d5286518" containerName="registry-server" containerID="cri-o://82c195af7d0e980f6af098fb8348a12a5cec23d3fbcbf254c9709a0312bb87cf" gracePeriod=30 Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.242722 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d489h"] Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.245332 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d489h" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.252245 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-br8zr"] Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.252822 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-br8zr" podUID="dd92d417-da27-4446-aade-0a17abc2eecf" containerName="registry-server" containerID="cri-o://2a36c36042c61b3950094923804384177640ab48da42471912298fb840416310" gracePeriod=30 Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.284902 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d489h"] Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.403232 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3048109-51ee-4326-827b-979dc4ec0481-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d489h\" (UID: \"f3048109-51ee-4326-827b-979dc4ec0481\") " pod="openshift-marketplace/marketplace-operator-79b997595-d489h" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.403376 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn9mk\" (UniqueName: \"kubernetes.io/projected/f3048109-51ee-4326-827b-979dc4ec0481-kube-api-access-dn9mk\") pod \"marketplace-operator-79b997595-d489h\" (UID: \"f3048109-51ee-4326-827b-979dc4ec0481\") " pod="openshift-marketplace/marketplace-operator-79b997595-d489h" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.403471 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f3048109-51ee-4326-827b-979dc4ec0481-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d489h\" (UID: \"f3048109-51ee-4326-827b-979dc4ec0481\") " pod="openshift-marketplace/marketplace-operator-79b997595-d489h" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.504742 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3048109-51ee-4326-827b-979dc4ec0481-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d489h\" (UID: \"f3048109-51ee-4326-827b-979dc4ec0481\") " pod="openshift-marketplace/marketplace-operator-79b997595-d489h" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.505132 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn9mk\" (UniqueName: \"kubernetes.io/projected/f3048109-51ee-4326-827b-979dc4ec0481-kube-api-access-dn9mk\") pod \"marketplace-operator-79b997595-d489h\" (UID: \"f3048109-51ee-4326-827b-979dc4ec0481\") " pod="openshift-marketplace/marketplace-operator-79b997595-d489h" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.505159 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3048109-51ee-4326-827b-979dc4ec0481-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d489h\" (UID: \"f3048109-51ee-4326-827b-979dc4ec0481\") " pod="openshift-marketplace/marketplace-operator-79b997595-d489h" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.510223 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f3048109-51ee-4326-827b-979dc4ec0481-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-d489h\" (UID: \"f3048109-51ee-4326-827b-979dc4ec0481\") " pod="openshift-marketplace/marketplace-operator-79b997595-d489h" Nov 24 21:24:15 
crc kubenswrapper[4915]: I1124 21:24:15.516650 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f3048109-51ee-4326-827b-979dc4ec0481-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-d489h\" (UID: \"f3048109-51ee-4326-827b-979dc4ec0481\") " pod="openshift-marketplace/marketplace-operator-79b997595-d489h" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.537476 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn9mk\" (UniqueName: \"kubernetes.io/projected/f3048109-51ee-4326-827b-979dc4ec0481-kube-api-access-dn9mk\") pod \"marketplace-operator-79b997595-d489h\" (UID: \"f3048109-51ee-4326-827b-979dc4ec0481\") " pod="openshift-marketplace/marketplace-operator-79b997595-d489h" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.622586 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-d489h" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.628909 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.684477 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.690962 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.697590 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.699764 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.808866 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4499c65-6038-409a-964e-4b00d5286518-catalog-content\") pod \"a4499c65-6038-409a-964e-4b00d5286518\" (UID: \"a4499c65-6038-409a-964e-4b00d5286518\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.808963 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvf9l\" (UniqueName: \"kubernetes.io/projected/dd92d417-da27-4446-aade-0a17abc2eecf-kube-api-access-nvf9l\") pod \"dd92d417-da27-4446-aade-0a17abc2eecf\" (UID: \"dd92d417-da27-4446-aade-0a17abc2eecf\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.808992 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skdmm\" (UniqueName: \"kubernetes.io/projected/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-kube-api-access-skdmm\") pod \"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\" (UID: \"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.809019 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-catalog-content\") pod \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\" (UID: \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.809041 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-utilities\") pod \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\" (UID: \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.809077 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-utilities\") pod \"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\" (UID: \"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.809108 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/38f0c117-4d15-4ac3-aece-8f0189d91bdb-marketplace-operator-metrics\") pod \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\" (UID: \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.809138 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g8m8\" (UniqueName: \"kubernetes.io/projected/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-kube-api-access-2g8m8\") pod \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\" (UID: \"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.809182 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd92d417-da27-4446-aade-0a17abc2eecf-utilities\") pod \"dd92d417-da27-4446-aade-0a17abc2eecf\" (UID: \"dd92d417-da27-4446-aade-0a17abc2eecf\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.809212 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5grtb\" (UniqueName: \"kubernetes.io/projected/a4499c65-6038-409a-964e-4b00d5286518-kube-api-access-5grtb\") pod \"a4499c65-6038-409a-964e-4b00d5286518\" (UID: \"a4499c65-6038-409a-964e-4b00d5286518\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.809240 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38f0c117-4d15-4ac3-aece-8f0189d91bdb-marketplace-trusted-ca\") pod \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\" 
(UID: \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.809264 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-catalog-content\") pod \"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\" (UID: \"04ecdca8-2d28-4a23-9c7d-107d0a882bc9\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.809310 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd92d417-da27-4446-aade-0a17abc2eecf-catalog-content\") pod \"dd92d417-da27-4446-aade-0a17abc2eecf\" (UID: \"dd92d417-da27-4446-aade-0a17abc2eecf\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.809334 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4srf\" (UniqueName: \"kubernetes.io/projected/38f0c117-4d15-4ac3-aece-8f0189d91bdb-kube-api-access-h4srf\") pod \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\" (UID: \"38f0c117-4d15-4ac3-aece-8f0189d91bdb\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.809364 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4499c65-6038-409a-964e-4b00d5286518-utilities\") pod \"a4499c65-6038-409a-964e-4b00d5286518\" (UID: \"a4499c65-6038-409a-964e-4b00d5286518\") " Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.812650 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-utilities" (OuterVolumeSpecName: "utilities") pod "200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" (UID: "200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.812822 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd92d417-da27-4446-aade-0a17abc2eecf-utilities" (OuterVolumeSpecName: "utilities") pod "dd92d417-da27-4446-aade-0a17abc2eecf" (UID: "dd92d417-da27-4446-aade-0a17abc2eecf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.813739 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-utilities" (OuterVolumeSpecName: "utilities") pod "04ecdca8-2d28-4a23-9c7d-107d0a882bc9" (UID: "04ecdca8-2d28-4a23-9c7d-107d0a882bc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.826525 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38f0c117-4d15-4ac3-aece-8f0189d91bdb-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "38f0c117-4d15-4ac3-aece-8f0189d91bdb" (UID: "38f0c117-4d15-4ac3-aece-8f0189d91bdb"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.834465 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd92d417-da27-4446-aade-0a17abc2eecf-kube-api-access-nvf9l" (OuterVolumeSpecName: "kube-api-access-nvf9l") pod "dd92d417-da27-4446-aade-0a17abc2eecf" (UID: "dd92d417-da27-4446-aade-0a17abc2eecf"). InnerVolumeSpecName "kube-api-access-nvf9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.846008 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4499c65-6038-409a-964e-4b00d5286518-utilities" (OuterVolumeSpecName: "utilities") pod "a4499c65-6038-409a-964e-4b00d5286518" (UID: "a4499c65-6038-409a-964e-4b00d5286518"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.849992 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38f0c117-4d15-4ac3-aece-8f0189d91bdb-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "38f0c117-4d15-4ac3-aece-8f0189d91bdb" (UID: "38f0c117-4d15-4ac3-aece-8f0189d91bdb"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.852160 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38f0c117-4d15-4ac3-aece-8f0189d91bdb-kube-api-access-h4srf" (OuterVolumeSpecName: "kube-api-access-h4srf") pod "38f0c117-4d15-4ac3-aece-8f0189d91bdb" (UID: "38f0c117-4d15-4ac3-aece-8f0189d91bdb"). InnerVolumeSpecName "kube-api-access-h4srf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.858323 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-kube-api-access-2g8m8" (OuterVolumeSpecName: "kube-api-access-2g8m8") pod "200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" (UID: "200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd"). InnerVolumeSpecName "kube-api-access-2g8m8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.860005 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4499c65-6038-409a-964e-4b00d5286518-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4499c65-6038-409a-964e-4b00d5286518" (UID: "a4499c65-6038-409a-964e-4b00d5286518"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.860848 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-kube-api-access-skdmm" (OuterVolumeSpecName: "kube-api-access-skdmm") pod "04ecdca8-2d28-4a23-9c7d-107d0a882bc9" (UID: "04ecdca8-2d28-4a23-9c7d-107d0a882bc9"). InnerVolumeSpecName "kube-api-access-skdmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.865285 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4499c65-6038-409a-964e-4b00d5286518-kube-api-access-5grtb" (OuterVolumeSpecName: "kube-api-access-5grtb") pod "a4499c65-6038-409a-964e-4b00d5286518" (UID: "a4499c65-6038-409a-964e-4b00d5286518"). InnerVolumeSpecName "kube-api-access-5grtb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.898101 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" (UID: "200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.903875 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04ecdca8-2d28-4a23-9c7d-107d0a882bc9" (UID: "04ecdca8-2d28-4a23-9c7d-107d0a882bc9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.911180 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvf9l\" (UniqueName: \"kubernetes.io/projected/dd92d417-da27-4446-aade-0a17abc2eecf-kube-api-access-nvf9l\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.911204 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skdmm\" (UniqueName: \"kubernetes.io/projected/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-kube-api-access-skdmm\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.911215 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.911224 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.911233 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.911242 4915 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/38f0c117-4d15-4ac3-aece-8f0189d91bdb-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.911252 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g8m8\" (UniqueName: \"kubernetes.io/projected/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd-kube-api-access-2g8m8\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.911259 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd92d417-da27-4446-aade-0a17abc2eecf-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.911267 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5grtb\" (UniqueName: \"kubernetes.io/projected/a4499c65-6038-409a-964e-4b00d5286518-kube-api-access-5grtb\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.911275 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04ecdca8-2d28-4a23-9c7d-107d0a882bc9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.911285 4915 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38f0c117-4d15-4ac3-aece-8f0189d91bdb-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.911293 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4srf\" (UniqueName: \"kubernetes.io/projected/38f0c117-4d15-4ac3-aece-8f0189d91bdb-kube-api-access-h4srf\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.911301 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a4499c65-6038-409a-964e-4b00d5286518-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.911308 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4499c65-6038-409a-964e-4b00d5286518-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:15 crc kubenswrapper[4915]: I1124 21:24:15.952836 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd92d417-da27-4446-aade-0a17abc2eecf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd92d417-da27-4446-aade-0a17abc2eecf" (UID: "dd92d417-da27-4446-aade-0a17abc2eecf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.012978 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd92d417-da27-4446-aade-0a17abc2eecf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.145386 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-d489h"] Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.253842 4915 generic.go:334] "Generic (PLEG): container finished" podID="dd92d417-da27-4446-aade-0a17abc2eecf" containerID="2a36c36042c61b3950094923804384177640ab48da42471912298fb840416310" exitCode=0 Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.253930 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-br8zr" event={"ID":"dd92d417-da27-4446-aade-0a17abc2eecf","Type":"ContainerDied","Data":"2a36c36042c61b3950094923804384177640ab48da42471912298fb840416310"} Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.253982 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-br8zr" event={"ID":"dd92d417-da27-4446-aade-0a17abc2eecf","Type":"ContainerDied","Data":"b20f8ec2607998e4c976915ad7496c5a0ca0edeba79684dce8d98449633a61e2"} Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.254003 4915 scope.go:117] "RemoveContainer" containerID="2a36c36042c61b3950094923804384177640ab48da42471912298fb840416310" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.254001 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-br8zr" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.255806 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-d489h" event={"ID":"f3048109-51ee-4326-827b-979dc4ec0481","Type":"ContainerStarted","Data":"267eea311425761ed4a582e8c7ae42b4920c3dc6a2e53d89931cf00d0d88f80d"} Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.258878 4915 generic.go:334] "Generic (PLEG): container finished" podID="04ecdca8-2d28-4a23-9c7d-107d0a882bc9" containerID="560b8398d00c30a3989c565fe6c3711f9dfd0ad7555ec122c7e3ee35679151d6" exitCode=0 Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.258934 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwqx9" event={"ID":"04ecdca8-2d28-4a23-9c7d-107d0a882bc9","Type":"ContainerDied","Data":"560b8398d00c30a3989c565fe6c3711f9dfd0ad7555ec122c7e3ee35679151d6"} Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.258983 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rwqx9" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.259053 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rwqx9" event={"ID":"04ecdca8-2d28-4a23-9c7d-107d0a882bc9","Type":"ContainerDied","Data":"cde896f2ccc269ef487e18696dda8ba45ad5a2baf5303e139145c68d1903b671"} Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.262700 4915 generic.go:334] "Generic (PLEG): container finished" podID="200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" containerID="dda6270f5d47452c44dc53908e74c3202a6b3ae97cc9cbf0f8af9badd1542a9f" exitCode=0 Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.262816 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prtlc" event={"ID":"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd","Type":"ContainerDied","Data":"dda6270f5d47452c44dc53908e74c3202a6b3ae97cc9cbf0f8af9badd1542a9f"} Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.262862 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prtlc" event={"ID":"200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd","Type":"ContainerDied","Data":"06037f09690cbb286c4d17642b8a910684be392b93628eb0e85bf6a5c1717357"} Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.262824 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prtlc" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.267611 4915 generic.go:334] "Generic (PLEG): container finished" podID="a4499c65-6038-409a-964e-4b00d5286518" containerID="82c195af7d0e980f6af098fb8348a12a5cec23d3fbcbf254c9709a0312bb87cf" exitCode=0 Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.267674 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khkz6" event={"ID":"a4499c65-6038-409a-964e-4b00d5286518","Type":"ContainerDied","Data":"82c195af7d0e980f6af098fb8348a12a5cec23d3fbcbf254c9709a0312bb87cf"} Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.267698 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khkz6" event={"ID":"a4499c65-6038-409a-964e-4b00d5286518","Type":"ContainerDied","Data":"47c1298d348eec9ebbd737abd8f2f5ac7b9884f3d4b5de5b0bb7be0cfd769737"} Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.267760 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khkz6" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.278297 4915 generic.go:334] "Generic (PLEG): container finished" podID="38f0c117-4d15-4ac3-aece-8f0189d91bdb" containerID="fcc6ba456b5973228b4b147ab3d4bbbbc4698a26856b640381f15b9f47caa322" exitCode=0 Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.278349 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" event={"ID":"38f0c117-4d15-4ac3-aece-8f0189d91bdb","Type":"ContainerDied","Data":"fcc6ba456b5973228b4b147ab3d4bbbbc4698a26856b640381f15b9f47caa322"} Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.278356 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.278376 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-zbtbl" event={"ID":"38f0c117-4d15-4ac3-aece-8f0189d91bdb","Type":"ContainerDied","Data":"40339628c858c0f7593565cd65339b6a5a2bf62fb2f98b21abc4b89e3e62d790"} Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.281606 4915 scope.go:117] "RemoveContainer" containerID="cf7a08c1d4cd0788738dfd5e9f59e567d622912523c4edf22d5e3311774b0968" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.299320 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-br8zr"] Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.308003 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-br8zr"] Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.321091 4915 scope.go:117] "RemoveContainer" containerID="f1d385bc2c955e3cde30a00736ed2d1a46d99d7f1909251b9be501aa49ec8f0e" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.321093 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-khkz6"] Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.328347 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-khkz6"] Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.332637 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rwqx9"] Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.339412 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rwqx9"] Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.341674 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zbtbl"] Nov 
24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.342083 4915 scope.go:117] "RemoveContainer" containerID="2a36c36042c61b3950094923804384177640ab48da42471912298fb840416310" Nov 24 21:24:16 crc kubenswrapper[4915]: E1124 21:24:16.345565 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a36c36042c61b3950094923804384177640ab48da42471912298fb840416310\": container with ID starting with 2a36c36042c61b3950094923804384177640ab48da42471912298fb840416310 not found: ID does not exist" containerID="2a36c36042c61b3950094923804384177640ab48da42471912298fb840416310" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.345627 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a36c36042c61b3950094923804384177640ab48da42471912298fb840416310"} err="failed to get container status \"2a36c36042c61b3950094923804384177640ab48da42471912298fb840416310\": rpc error: code = NotFound desc = could not find container \"2a36c36042c61b3950094923804384177640ab48da42471912298fb840416310\": container with ID starting with 2a36c36042c61b3950094923804384177640ab48da42471912298fb840416310 not found: ID does not exist" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.345659 4915 scope.go:117] "RemoveContainer" containerID="cf7a08c1d4cd0788738dfd5e9f59e567d622912523c4edf22d5e3311774b0968" Nov 24 21:24:16 crc kubenswrapper[4915]: E1124 21:24:16.346180 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf7a08c1d4cd0788738dfd5e9f59e567d622912523c4edf22d5e3311774b0968\": container with ID starting with cf7a08c1d4cd0788738dfd5e9f59e567d622912523c4edf22d5e3311774b0968 not found: ID does not exist" containerID="cf7a08c1d4cd0788738dfd5e9f59e567d622912523c4edf22d5e3311774b0968" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.346230 4915 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"cf7a08c1d4cd0788738dfd5e9f59e567d622912523c4edf22d5e3311774b0968"} err="failed to get container status \"cf7a08c1d4cd0788738dfd5e9f59e567d622912523c4edf22d5e3311774b0968\": rpc error: code = NotFound desc = could not find container \"cf7a08c1d4cd0788738dfd5e9f59e567d622912523c4edf22d5e3311774b0968\": container with ID starting with cf7a08c1d4cd0788738dfd5e9f59e567d622912523c4edf22d5e3311774b0968 not found: ID does not exist" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.346257 4915 scope.go:117] "RemoveContainer" containerID="f1d385bc2c955e3cde30a00736ed2d1a46d99d7f1909251b9be501aa49ec8f0e" Nov 24 21:24:16 crc kubenswrapper[4915]: E1124 21:24:16.346556 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1d385bc2c955e3cde30a00736ed2d1a46d99d7f1909251b9be501aa49ec8f0e\": container with ID starting with f1d385bc2c955e3cde30a00736ed2d1a46d99d7f1909251b9be501aa49ec8f0e not found: ID does not exist" containerID="f1d385bc2c955e3cde30a00736ed2d1a46d99d7f1909251b9be501aa49ec8f0e" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.346583 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1d385bc2c955e3cde30a00736ed2d1a46d99d7f1909251b9be501aa49ec8f0e"} err="failed to get container status \"f1d385bc2c955e3cde30a00736ed2d1a46d99d7f1909251b9be501aa49ec8f0e\": rpc error: code = NotFound desc = could not find container \"f1d385bc2c955e3cde30a00736ed2d1a46d99d7f1909251b9be501aa49ec8f0e\": container with ID starting with f1d385bc2c955e3cde30a00736ed2d1a46d99d7f1909251b9be501aa49ec8f0e not found: ID does not exist" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.346605 4915 scope.go:117] "RemoveContainer" containerID="560b8398d00c30a3989c565fe6c3711f9dfd0ad7555ec122c7e3ee35679151d6" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.347225 4915 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-zbtbl"] Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.353381 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prtlc"] Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.358280 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-prtlc"] Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.389415 4915 scope.go:117] "RemoveContainer" containerID="f9518f07aa72dd5e935edbb660bca3daade6449bdd826135a51be1d3ecd6642e" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.406449 4915 scope.go:117] "RemoveContainer" containerID="fea4aa568a5ed6298c30c2e1427ce17bf59aa3db08ebe21b426a31a386b9c61a" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.424564 4915 scope.go:117] "RemoveContainer" containerID="560b8398d00c30a3989c565fe6c3711f9dfd0ad7555ec122c7e3ee35679151d6" Nov 24 21:24:16 crc kubenswrapper[4915]: E1124 21:24:16.425028 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"560b8398d00c30a3989c565fe6c3711f9dfd0ad7555ec122c7e3ee35679151d6\": container with ID starting with 560b8398d00c30a3989c565fe6c3711f9dfd0ad7555ec122c7e3ee35679151d6 not found: ID does not exist" containerID="560b8398d00c30a3989c565fe6c3711f9dfd0ad7555ec122c7e3ee35679151d6" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.425067 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"560b8398d00c30a3989c565fe6c3711f9dfd0ad7555ec122c7e3ee35679151d6"} err="failed to get container status \"560b8398d00c30a3989c565fe6c3711f9dfd0ad7555ec122c7e3ee35679151d6\": rpc error: code = NotFound desc = could not find container \"560b8398d00c30a3989c565fe6c3711f9dfd0ad7555ec122c7e3ee35679151d6\": container with ID starting with 560b8398d00c30a3989c565fe6c3711f9dfd0ad7555ec122c7e3ee35679151d6 not 
found: ID does not exist" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.425094 4915 scope.go:117] "RemoveContainer" containerID="f9518f07aa72dd5e935edbb660bca3daade6449bdd826135a51be1d3ecd6642e" Nov 24 21:24:16 crc kubenswrapper[4915]: E1124 21:24:16.425441 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9518f07aa72dd5e935edbb660bca3daade6449bdd826135a51be1d3ecd6642e\": container with ID starting with f9518f07aa72dd5e935edbb660bca3daade6449bdd826135a51be1d3ecd6642e not found: ID does not exist" containerID="f9518f07aa72dd5e935edbb660bca3daade6449bdd826135a51be1d3ecd6642e" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.425464 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9518f07aa72dd5e935edbb660bca3daade6449bdd826135a51be1d3ecd6642e"} err="failed to get container status \"f9518f07aa72dd5e935edbb660bca3daade6449bdd826135a51be1d3ecd6642e\": rpc error: code = NotFound desc = could not find container \"f9518f07aa72dd5e935edbb660bca3daade6449bdd826135a51be1d3ecd6642e\": container with ID starting with f9518f07aa72dd5e935edbb660bca3daade6449bdd826135a51be1d3ecd6642e not found: ID does not exist" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.425480 4915 scope.go:117] "RemoveContainer" containerID="fea4aa568a5ed6298c30c2e1427ce17bf59aa3db08ebe21b426a31a386b9c61a" Nov 24 21:24:16 crc kubenswrapper[4915]: E1124 21:24:16.425768 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fea4aa568a5ed6298c30c2e1427ce17bf59aa3db08ebe21b426a31a386b9c61a\": container with ID starting with fea4aa568a5ed6298c30c2e1427ce17bf59aa3db08ebe21b426a31a386b9c61a not found: ID does not exist" containerID="fea4aa568a5ed6298c30c2e1427ce17bf59aa3db08ebe21b426a31a386b9c61a" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.425811 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fea4aa568a5ed6298c30c2e1427ce17bf59aa3db08ebe21b426a31a386b9c61a"} err="failed to get container status \"fea4aa568a5ed6298c30c2e1427ce17bf59aa3db08ebe21b426a31a386b9c61a\": rpc error: code = NotFound desc = could not find container \"fea4aa568a5ed6298c30c2e1427ce17bf59aa3db08ebe21b426a31a386b9c61a\": container with ID starting with fea4aa568a5ed6298c30c2e1427ce17bf59aa3db08ebe21b426a31a386b9c61a not found: ID does not exist" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.425830 4915 scope.go:117] "RemoveContainer" containerID="dda6270f5d47452c44dc53908e74c3202a6b3ae97cc9cbf0f8af9badd1542a9f" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.432306 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ecdca8-2d28-4a23-9c7d-107d0a882bc9" path="/var/lib/kubelet/pods/04ecdca8-2d28-4a23-9c7d-107d0a882bc9/volumes" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.433028 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" path="/var/lib/kubelet/pods/200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd/volumes" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.433613 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38f0c117-4d15-4ac3-aece-8f0189d91bdb" path="/var/lib/kubelet/pods/38f0c117-4d15-4ac3-aece-8f0189d91bdb/volumes" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.434450 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4499c65-6038-409a-964e-4b00d5286518" path="/var/lib/kubelet/pods/a4499c65-6038-409a-964e-4b00d5286518/volumes" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.435013 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd92d417-da27-4446-aade-0a17abc2eecf" path="/var/lib/kubelet/pods/dd92d417-da27-4446-aade-0a17abc2eecf/volumes" Nov 24 21:24:16 crc kubenswrapper[4915]: 
I1124 21:24:16.439768 4915 scope.go:117] "RemoveContainer" containerID="89d4e35e7a4bd517ac92a18fdf4106e929a2ab204bcd3897c26a50cb1cf0b24e" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.455797 4915 scope.go:117] "RemoveContainer" containerID="38ca78ec8ca521c59a1d0c9f1100906e0876cf355309bb6d257c6305c2c35927" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.468193 4915 scope.go:117] "RemoveContainer" containerID="dda6270f5d47452c44dc53908e74c3202a6b3ae97cc9cbf0f8af9badd1542a9f" Nov 24 21:24:16 crc kubenswrapper[4915]: E1124 21:24:16.468608 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dda6270f5d47452c44dc53908e74c3202a6b3ae97cc9cbf0f8af9badd1542a9f\": container with ID starting with dda6270f5d47452c44dc53908e74c3202a6b3ae97cc9cbf0f8af9badd1542a9f not found: ID does not exist" containerID="dda6270f5d47452c44dc53908e74c3202a6b3ae97cc9cbf0f8af9badd1542a9f" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.468645 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dda6270f5d47452c44dc53908e74c3202a6b3ae97cc9cbf0f8af9badd1542a9f"} err="failed to get container status \"dda6270f5d47452c44dc53908e74c3202a6b3ae97cc9cbf0f8af9badd1542a9f\": rpc error: code = NotFound desc = could not find container \"dda6270f5d47452c44dc53908e74c3202a6b3ae97cc9cbf0f8af9badd1542a9f\": container with ID starting with dda6270f5d47452c44dc53908e74c3202a6b3ae97cc9cbf0f8af9badd1542a9f not found: ID does not exist" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.468666 4915 scope.go:117] "RemoveContainer" containerID="89d4e35e7a4bd517ac92a18fdf4106e929a2ab204bcd3897c26a50cb1cf0b24e" Nov 24 21:24:16 crc kubenswrapper[4915]: E1124 21:24:16.468921 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89d4e35e7a4bd517ac92a18fdf4106e929a2ab204bcd3897c26a50cb1cf0b24e\": 
container with ID starting with 89d4e35e7a4bd517ac92a18fdf4106e929a2ab204bcd3897c26a50cb1cf0b24e not found: ID does not exist" containerID="89d4e35e7a4bd517ac92a18fdf4106e929a2ab204bcd3897c26a50cb1cf0b24e" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.468953 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89d4e35e7a4bd517ac92a18fdf4106e929a2ab204bcd3897c26a50cb1cf0b24e"} err="failed to get container status \"89d4e35e7a4bd517ac92a18fdf4106e929a2ab204bcd3897c26a50cb1cf0b24e\": rpc error: code = NotFound desc = could not find container \"89d4e35e7a4bd517ac92a18fdf4106e929a2ab204bcd3897c26a50cb1cf0b24e\": container with ID starting with 89d4e35e7a4bd517ac92a18fdf4106e929a2ab204bcd3897c26a50cb1cf0b24e not found: ID does not exist" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.468972 4915 scope.go:117] "RemoveContainer" containerID="38ca78ec8ca521c59a1d0c9f1100906e0876cf355309bb6d257c6305c2c35927" Nov 24 21:24:16 crc kubenswrapper[4915]: E1124 21:24:16.469308 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38ca78ec8ca521c59a1d0c9f1100906e0876cf355309bb6d257c6305c2c35927\": container with ID starting with 38ca78ec8ca521c59a1d0c9f1100906e0876cf355309bb6d257c6305c2c35927 not found: ID does not exist" containerID="38ca78ec8ca521c59a1d0c9f1100906e0876cf355309bb6d257c6305c2c35927" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.469336 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38ca78ec8ca521c59a1d0c9f1100906e0876cf355309bb6d257c6305c2c35927"} err="failed to get container status \"38ca78ec8ca521c59a1d0c9f1100906e0876cf355309bb6d257c6305c2c35927\": rpc error: code = NotFound desc = could not find container \"38ca78ec8ca521c59a1d0c9f1100906e0876cf355309bb6d257c6305c2c35927\": container with ID starting with 
38ca78ec8ca521c59a1d0c9f1100906e0876cf355309bb6d257c6305c2c35927 not found: ID does not exist" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.469350 4915 scope.go:117] "RemoveContainer" containerID="82c195af7d0e980f6af098fb8348a12a5cec23d3fbcbf254c9709a0312bb87cf" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.485934 4915 scope.go:117] "RemoveContainer" containerID="bc6647fdbbd07473aa1375d49fd255d9977f5811eb769162e82d982b469dc8dc" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.499120 4915 scope.go:117] "RemoveContainer" containerID="63513af541226991bcef3215f7f8cb4d7ecc02e0dbe2a21f655bf6ba8d68fe1c" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.511332 4915 scope.go:117] "RemoveContainer" containerID="82c195af7d0e980f6af098fb8348a12a5cec23d3fbcbf254c9709a0312bb87cf" Nov 24 21:24:16 crc kubenswrapper[4915]: E1124 21:24:16.511788 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c195af7d0e980f6af098fb8348a12a5cec23d3fbcbf254c9709a0312bb87cf\": container with ID starting with 82c195af7d0e980f6af098fb8348a12a5cec23d3fbcbf254c9709a0312bb87cf not found: ID does not exist" containerID="82c195af7d0e980f6af098fb8348a12a5cec23d3fbcbf254c9709a0312bb87cf" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.512508 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c195af7d0e980f6af098fb8348a12a5cec23d3fbcbf254c9709a0312bb87cf"} err="failed to get container status \"82c195af7d0e980f6af098fb8348a12a5cec23d3fbcbf254c9709a0312bb87cf\": rpc error: code = NotFound desc = could not find container \"82c195af7d0e980f6af098fb8348a12a5cec23d3fbcbf254c9709a0312bb87cf\": container with ID starting with 82c195af7d0e980f6af098fb8348a12a5cec23d3fbcbf254c9709a0312bb87cf not found: ID does not exist" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.512553 4915 scope.go:117] "RemoveContainer" 
containerID="bc6647fdbbd07473aa1375d49fd255d9977f5811eb769162e82d982b469dc8dc" Nov 24 21:24:16 crc kubenswrapper[4915]: E1124 21:24:16.513525 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc6647fdbbd07473aa1375d49fd255d9977f5811eb769162e82d982b469dc8dc\": container with ID starting with bc6647fdbbd07473aa1375d49fd255d9977f5811eb769162e82d982b469dc8dc not found: ID does not exist" containerID="bc6647fdbbd07473aa1375d49fd255d9977f5811eb769162e82d982b469dc8dc" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.513585 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc6647fdbbd07473aa1375d49fd255d9977f5811eb769162e82d982b469dc8dc"} err="failed to get container status \"bc6647fdbbd07473aa1375d49fd255d9977f5811eb769162e82d982b469dc8dc\": rpc error: code = NotFound desc = could not find container \"bc6647fdbbd07473aa1375d49fd255d9977f5811eb769162e82d982b469dc8dc\": container with ID starting with bc6647fdbbd07473aa1375d49fd255d9977f5811eb769162e82d982b469dc8dc not found: ID does not exist" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.513621 4915 scope.go:117] "RemoveContainer" containerID="63513af541226991bcef3215f7f8cb4d7ecc02e0dbe2a21f655bf6ba8d68fe1c" Nov 24 21:24:16 crc kubenswrapper[4915]: E1124 21:24:16.514028 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63513af541226991bcef3215f7f8cb4d7ecc02e0dbe2a21f655bf6ba8d68fe1c\": container with ID starting with 63513af541226991bcef3215f7f8cb4d7ecc02e0dbe2a21f655bf6ba8d68fe1c not found: ID does not exist" containerID="63513af541226991bcef3215f7f8cb4d7ecc02e0dbe2a21f655bf6ba8d68fe1c" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.514059 4915 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"63513af541226991bcef3215f7f8cb4d7ecc02e0dbe2a21f655bf6ba8d68fe1c"} err="failed to get container status \"63513af541226991bcef3215f7f8cb4d7ecc02e0dbe2a21f655bf6ba8d68fe1c\": rpc error: code = NotFound desc = could not find container \"63513af541226991bcef3215f7f8cb4d7ecc02e0dbe2a21f655bf6ba8d68fe1c\": container with ID starting with 63513af541226991bcef3215f7f8cb4d7ecc02e0dbe2a21f655bf6ba8d68fe1c not found: ID does not exist" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.514078 4915 scope.go:117] "RemoveContainer" containerID="fcc6ba456b5973228b4b147ab3d4bbbbc4698a26856b640381f15b9f47caa322" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.532760 4915 scope.go:117] "RemoveContainer" containerID="fcc6ba456b5973228b4b147ab3d4bbbbc4698a26856b640381f15b9f47caa322" Nov 24 21:24:16 crc kubenswrapper[4915]: E1124 21:24:16.533328 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcc6ba456b5973228b4b147ab3d4bbbbc4698a26856b640381f15b9f47caa322\": container with ID starting with fcc6ba456b5973228b4b147ab3d4bbbbc4698a26856b640381f15b9f47caa322 not found: ID does not exist" containerID="fcc6ba456b5973228b4b147ab3d4bbbbc4698a26856b640381f15b9f47caa322" Nov 24 21:24:16 crc kubenswrapper[4915]: I1124 21:24:16.533372 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcc6ba456b5973228b4b147ab3d4bbbbc4698a26856b640381f15b9f47caa322"} err="failed to get container status \"fcc6ba456b5973228b4b147ab3d4bbbbc4698a26856b640381f15b9f47caa322\": rpc error: code = NotFound desc = could not find container \"fcc6ba456b5973228b4b147ab3d4bbbbc4698a26856b640381f15b9f47caa322\": container with ID starting with fcc6ba456b5973228b4b147ab3d4bbbbc4698a26856b640381f15b9f47caa322 not found: ID does not exist" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.290895 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-d489h" event={"ID":"f3048109-51ee-4326-827b-979dc4ec0481","Type":"ContainerStarted","Data":"5e222a94addb7b300f8495bd5d47bd148831cafa9d19b11a04aa7053757c33fc"} Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.291345 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-d489h" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.304378 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-d489h" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.312194 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-d489h" podStartSLOduration=2.312176647 podStartE2EDuration="2.312176647s" podCreationTimestamp="2025-11-24 21:24:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:24:17.309916824 +0000 UTC m=+275.626168997" watchObservedRunningTime="2025-11-24 21:24:17.312176647 +0000 UTC m=+275.628428820" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.409543 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5hkdx"] Nov 24 21:24:17 crc kubenswrapper[4915]: E1124 21:24:17.409723 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd92d417-da27-4446-aade-0a17abc2eecf" containerName="registry-server" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.409734 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd92d417-da27-4446-aade-0a17abc2eecf" containerName="registry-server" Nov 24 21:24:17 crc kubenswrapper[4915]: E1124 21:24:17.409745 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ecdca8-2d28-4a23-9c7d-107d0a882bc9" containerName="extract-content" Nov 24 21:24:17 crc 
kubenswrapper[4915]: I1124 21:24:17.409750 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ecdca8-2d28-4a23-9c7d-107d0a882bc9" containerName="extract-content" Nov 24 21:24:17 crc kubenswrapper[4915]: E1124 21:24:17.409760 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd92d417-da27-4446-aade-0a17abc2eecf" containerName="extract-content" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.409767 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd92d417-da27-4446-aade-0a17abc2eecf" containerName="extract-content" Nov 24 21:24:17 crc kubenswrapper[4915]: E1124 21:24:17.409908 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4499c65-6038-409a-964e-4b00d5286518" containerName="registry-server" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.409942 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4499c65-6038-409a-964e-4b00d5286518" containerName="registry-server" Nov 24 21:24:17 crc kubenswrapper[4915]: E1124 21:24:17.409952 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" containerName="registry-server" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.409957 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" containerName="registry-server" Nov 24 21:24:17 crc kubenswrapper[4915]: E1124 21:24:17.409966 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4499c65-6038-409a-964e-4b00d5286518" containerName="extract-content" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.409972 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4499c65-6038-409a-964e-4b00d5286518" containerName="extract-content" Nov 24 21:24:17 crc kubenswrapper[4915]: E1124 21:24:17.409982 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" containerName="extract-utilities" Nov 24 21:24:17 crc 
kubenswrapper[4915]: I1124 21:24:17.409988 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" containerName="extract-utilities" Nov 24 21:24:17 crc kubenswrapper[4915]: E1124 21:24:17.410001 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ecdca8-2d28-4a23-9c7d-107d0a882bc9" containerName="extract-utilities" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.410007 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ecdca8-2d28-4a23-9c7d-107d0a882bc9" containerName="extract-utilities" Nov 24 21:24:17 crc kubenswrapper[4915]: E1124 21:24:17.410017 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4499c65-6038-409a-964e-4b00d5286518" containerName="extract-utilities" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.410024 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4499c65-6038-409a-964e-4b00d5286518" containerName="extract-utilities" Nov 24 21:24:17 crc kubenswrapper[4915]: E1124 21:24:17.410032 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" containerName="extract-content" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.410037 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" containerName="extract-content" Nov 24 21:24:17 crc kubenswrapper[4915]: E1124 21:24:17.410044 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ecdca8-2d28-4a23-9c7d-107d0a882bc9" containerName="registry-server" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.410049 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ecdca8-2d28-4a23-9c7d-107d0a882bc9" containerName="registry-server" Nov 24 21:24:17 crc kubenswrapper[4915]: E1124 21:24:17.410059 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd92d417-da27-4446-aade-0a17abc2eecf" containerName="extract-utilities" Nov 24 21:24:17 crc 
kubenswrapper[4915]: I1124 21:24:17.410065 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd92d417-da27-4446-aade-0a17abc2eecf" containerName="extract-utilities" Nov 24 21:24:17 crc kubenswrapper[4915]: E1124 21:24:17.410071 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38f0c117-4d15-4ac3-aece-8f0189d91bdb" containerName="marketplace-operator" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.410076 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="38f0c117-4d15-4ac3-aece-8f0189d91bdb" containerName="marketplace-operator" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.410161 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd92d417-da27-4446-aade-0a17abc2eecf" containerName="registry-server" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.410172 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="200cd8d9-f4f2-4c89-a648-8e0ed7adb9dd" containerName="registry-server" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.410180 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4499c65-6038-409a-964e-4b00d5286518" containerName="registry-server" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.410191 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="38f0c117-4d15-4ac3-aece-8f0189d91bdb" containerName="marketplace-operator" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.410198 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ecdca8-2d28-4a23-9c7d-107d0a882bc9" containerName="registry-server" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.410856 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.413420 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.425623 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5hkdx"] Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.532715 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-catalog-content\") pod \"certified-operators-5hkdx\" (UID: \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\") " pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.532796 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-utilities\") pod \"certified-operators-5hkdx\" (UID: \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\") " pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.532912 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-276dt\" (UniqueName: \"kubernetes.io/projected/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-kube-api-access-276dt\") pod \"certified-operators-5hkdx\" (UID: \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\") " pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.612791 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hj6cr"] Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.613965 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.616317 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.623794 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hj6cr"] Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.636169 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-276dt\" (UniqueName: \"kubernetes.io/projected/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-kube-api-access-276dt\") pod \"certified-operators-5hkdx\" (UID: \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\") " pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.636220 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-catalog-content\") pod \"certified-operators-5hkdx\" (UID: \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\") " pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.636354 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-utilities\") pod \"certified-operators-5hkdx\" (UID: \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\") " pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.636831 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-catalog-content\") pod \"certified-operators-5hkdx\" (UID: \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\") " 
pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.636853 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-utilities\") pod \"certified-operators-5hkdx\" (UID: \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\") " pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.658327 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-276dt\" (UniqueName: \"kubernetes.io/projected/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-kube-api-access-276dt\") pod \"certified-operators-5hkdx\" (UID: \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\") " pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.737259 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb4k7\" (UniqueName: \"kubernetes.io/projected/e7b7d384-453b-42c0-89ec-b967910ab508-kube-api-access-sb4k7\") pod \"redhat-marketplace-hj6cr\" (UID: \"e7b7d384-453b-42c0-89ec-b967910ab508\") " pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.737314 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b7d384-453b-42c0-89ec-b967910ab508-catalog-content\") pod \"redhat-marketplace-hj6cr\" (UID: \"e7b7d384-453b-42c0-89ec-b967910ab508\") " pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.737375 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b7d384-453b-42c0-89ec-b967910ab508-utilities\") pod \"redhat-marketplace-hj6cr\" (UID: 
\"e7b7d384-453b-42c0-89ec-b967910ab508\") " pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.776312 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.838995 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b7d384-453b-42c0-89ec-b967910ab508-catalog-content\") pod \"redhat-marketplace-hj6cr\" (UID: \"e7b7d384-453b-42c0-89ec-b967910ab508\") " pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.839072 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b7d384-453b-42c0-89ec-b967910ab508-utilities\") pod \"redhat-marketplace-hj6cr\" (UID: \"e7b7d384-453b-42c0-89ec-b967910ab508\") " pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.839175 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb4k7\" (UniqueName: \"kubernetes.io/projected/e7b7d384-453b-42c0-89ec-b967910ab508-kube-api-access-sb4k7\") pod \"redhat-marketplace-hj6cr\" (UID: \"e7b7d384-453b-42c0-89ec-b967910ab508\") " pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.839808 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7b7d384-453b-42c0-89ec-b967910ab508-utilities\") pod \"redhat-marketplace-hj6cr\" (UID: \"e7b7d384-453b-42c0-89ec-b967910ab508\") " pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.839937 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7b7d384-453b-42c0-89ec-b967910ab508-catalog-content\") pod \"redhat-marketplace-hj6cr\" (UID: \"e7b7d384-453b-42c0-89ec-b967910ab508\") " pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.860364 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb4k7\" (UniqueName: \"kubernetes.io/projected/e7b7d384-453b-42c0-89ec-b967910ab508-kube-api-access-sb4k7\") pod \"redhat-marketplace-hj6cr\" (UID: \"e7b7d384-453b-42c0-89ec-b967910ab508\") " pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:17 crc kubenswrapper[4915]: I1124 21:24:17.943026 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:18 crc kubenswrapper[4915]: I1124 21:24:18.116226 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hj6cr"] Nov 24 21:24:18 crc kubenswrapper[4915]: W1124 21:24:18.126497 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7b7d384_453b_42c0_89ec_b967910ab508.slice/crio-bb744ae41024e16ea6ae624a24ca582a4f935ee3edebfe85edc2cb5dbc20e7b8 WatchSource:0}: Error finding container bb744ae41024e16ea6ae624a24ca582a4f935ee3edebfe85edc2cb5dbc20e7b8: Status 404 returned error can't find the container with id bb744ae41024e16ea6ae624a24ca582a4f935ee3edebfe85edc2cb5dbc20e7b8 Nov 24 21:24:18 crc kubenswrapper[4915]: I1124 21:24:18.162464 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5hkdx"] Nov 24 21:24:18 crc kubenswrapper[4915]: W1124 21:24:18.167420 4915 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0ce54f3_6d64_4d55_85c4_1428aab6cad1.slice/crio-f8e596e81dff3700cdcd035be0e490cb6c56ae7526da317cbeef0d85592fab18 WatchSource:0}: Error finding container f8e596e81dff3700cdcd035be0e490cb6c56ae7526da317cbeef0d85592fab18: Status 404 returned error can't find the container with id f8e596e81dff3700cdcd035be0e490cb6c56ae7526da317cbeef0d85592fab18 Nov 24 21:24:18 crc kubenswrapper[4915]: I1124 21:24:18.300476 4915 generic.go:334] "Generic (PLEG): container finished" podID="e7b7d384-453b-42c0-89ec-b967910ab508" containerID="9934f8ec11489f96112ea8957ae5fe391b961901715d4a3d4ed67d722807a3f8" exitCode=0 Nov 24 21:24:18 crc kubenswrapper[4915]: I1124 21:24:18.300714 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj6cr" event={"ID":"e7b7d384-453b-42c0-89ec-b967910ab508","Type":"ContainerDied","Data":"9934f8ec11489f96112ea8957ae5fe391b961901715d4a3d4ed67d722807a3f8"} Nov 24 21:24:18 crc kubenswrapper[4915]: I1124 21:24:18.300921 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj6cr" event={"ID":"e7b7d384-453b-42c0-89ec-b967910ab508","Type":"ContainerStarted","Data":"bb744ae41024e16ea6ae624a24ca582a4f935ee3edebfe85edc2cb5dbc20e7b8"} Nov 24 21:24:18 crc kubenswrapper[4915]: I1124 21:24:18.303646 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hkdx" event={"ID":"e0ce54f3-6d64-4d55-85c4-1428aab6cad1","Type":"ContainerStarted","Data":"0001a8fc4732fb7fd374f524706d68f6be6601f9242dcc3c1845d828d56d2afb"} Nov 24 21:24:18 crc kubenswrapper[4915]: I1124 21:24:18.303686 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hkdx" event={"ID":"e0ce54f3-6d64-4d55-85c4-1428aab6cad1","Type":"ContainerStarted","Data":"f8e596e81dff3700cdcd035be0e490cb6c56ae7526da317cbeef0d85592fab18"} Nov 24 21:24:19 crc kubenswrapper[4915]: 
I1124 21:24:19.311645 4915 generic.go:334] "Generic (PLEG): container finished" podID="e7b7d384-453b-42c0-89ec-b967910ab508" containerID="9704f110f2bb760e7f85435c9451c05dae5a5e36d2fba4656391c6b10f4d9546" exitCode=0 Nov 24 21:24:19 crc kubenswrapper[4915]: I1124 21:24:19.311733 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj6cr" event={"ID":"e7b7d384-453b-42c0-89ec-b967910ab508","Type":"ContainerDied","Data":"9704f110f2bb760e7f85435c9451c05dae5a5e36d2fba4656391c6b10f4d9546"} Nov 24 21:24:19 crc kubenswrapper[4915]: I1124 21:24:19.317059 4915 generic.go:334] "Generic (PLEG): container finished" podID="e0ce54f3-6d64-4d55-85c4-1428aab6cad1" containerID="0001a8fc4732fb7fd374f524706d68f6be6601f9242dcc3c1845d828d56d2afb" exitCode=0 Nov 24 21:24:19 crc kubenswrapper[4915]: I1124 21:24:19.317170 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hkdx" event={"ID":"e0ce54f3-6d64-4d55-85c4-1428aab6cad1","Type":"ContainerDied","Data":"0001a8fc4732fb7fd374f524706d68f6be6601f9242dcc3c1845d828d56d2afb"} Nov 24 21:24:19 crc kubenswrapper[4915]: I1124 21:24:19.813119 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wvj54"] Nov 24 21:24:19 crc kubenswrapper[4915]: I1124 21:24:19.814275 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:19 crc kubenswrapper[4915]: I1124 21:24:19.816644 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 24 21:24:19 crc kubenswrapper[4915]: I1124 21:24:19.822445 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvj54"] Nov 24 21:24:19 crc kubenswrapper[4915]: I1124 21:24:19.989556 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c95008b7-0cb9-4be3-bb4e-178ff7d25cc3-catalog-content\") pod \"community-operators-wvj54\" (UID: \"c95008b7-0cb9-4be3-bb4e-178ff7d25cc3\") " pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:19 crc kubenswrapper[4915]: I1124 21:24:19.989613 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jqhx\" (UniqueName: \"kubernetes.io/projected/c95008b7-0cb9-4be3-bb4e-178ff7d25cc3-kube-api-access-9jqhx\") pod \"community-operators-wvj54\" (UID: \"c95008b7-0cb9-4be3-bb4e-178ff7d25cc3\") " pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:19 crc kubenswrapper[4915]: I1124 21:24:19.989639 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c95008b7-0cb9-4be3-bb4e-178ff7d25cc3-utilities\") pod \"community-operators-wvj54\" (UID: \"c95008b7-0cb9-4be3-bb4e-178ff7d25cc3\") " pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.016119 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4b2jj"] Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.017650 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.020967 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.029226 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4b2jj"] Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.090532 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c95008b7-0cb9-4be3-bb4e-178ff7d25cc3-catalog-content\") pod \"community-operators-wvj54\" (UID: \"c95008b7-0cb9-4be3-bb4e-178ff7d25cc3\") " pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.090582 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a466a23-e349-49af-b513-e6e96cab17c7-catalog-content\") pod \"redhat-operators-4b2jj\" (UID: \"2a466a23-e349-49af-b513-e6e96cab17c7\") " pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.090611 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jqhx\" (UniqueName: \"kubernetes.io/projected/c95008b7-0cb9-4be3-bb4e-178ff7d25cc3-kube-api-access-9jqhx\") pod \"community-operators-wvj54\" (UID: \"c95008b7-0cb9-4be3-bb4e-178ff7d25cc3\") " pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.090634 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c95008b7-0cb9-4be3-bb4e-178ff7d25cc3-utilities\") pod \"community-operators-wvj54\" (UID: \"c95008b7-0cb9-4be3-bb4e-178ff7d25cc3\") " 
pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.090670 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a466a23-e349-49af-b513-e6e96cab17c7-utilities\") pod \"redhat-operators-4b2jj\" (UID: \"2a466a23-e349-49af-b513-e6e96cab17c7\") " pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.090750 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj67k\" (UniqueName: \"kubernetes.io/projected/2a466a23-e349-49af-b513-e6e96cab17c7-kube-api-access-wj67k\") pod \"redhat-operators-4b2jj\" (UID: \"2a466a23-e349-49af-b513-e6e96cab17c7\") " pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.091946 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c95008b7-0cb9-4be3-bb4e-178ff7d25cc3-catalog-content\") pod \"community-operators-wvj54\" (UID: \"c95008b7-0cb9-4be3-bb4e-178ff7d25cc3\") " pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.092322 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c95008b7-0cb9-4be3-bb4e-178ff7d25cc3-utilities\") pod \"community-operators-wvj54\" (UID: \"c95008b7-0cb9-4be3-bb4e-178ff7d25cc3\") " pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.109731 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jqhx\" (UniqueName: \"kubernetes.io/projected/c95008b7-0cb9-4be3-bb4e-178ff7d25cc3-kube-api-access-9jqhx\") pod \"community-operators-wvj54\" (UID: \"c95008b7-0cb9-4be3-bb4e-178ff7d25cc3\") " 
pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.139659 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.192076 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj67k\" (UniqueName: \"kubernetes.io/projected/2a466a23-e349-49af-b513-e6e96cab17c7-kube-api-access-wj67k\") pod \"redhat-operators-4b2jj\" (UID: \"2a466a23-e349-49af-b513-e6e96cab17c7\") " pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.193114 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a466a23-e349-49af-b513-e6e96cab17c7-catalog-content\") pod \"redhat-operators-4b2jj\" (UID: \"2a466a23-e349-49af-b513-e6e96cab17c7\") " pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.193242 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a466a23-e349-49af-b513-e6e96cab17c7-utilities\") pod \"redhat-operators-4b2jj\" (UID: \"2a466a23-e349-49af-b513-e6e96cab17c7\") " pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.193840 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a466a23-e349-49af-b513-e6e96cab17c7-utilities\") pod \"redhat-operators-4b2jj\" (UID: \"2a466a23-e349-49af-b513-e6e96cab17c7\") " pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.194108 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2a466a23-e349-49af-b513-e6e96cab17c7-catalog-content\") pod \"redhat-operators-4b2jj\" (UID: \"2a466a23-e349-49af-b513-e6e96cab17c7\") " pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.215628 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj67k\" (UniqueName: \"kubernetes.io/projected/2a466a23-e349-49af-b513-e6e96cab17c7-kube-api-access-wj67k\") pod \"redhat-operators-4b2jj\" (UID: \"2a466a23-e349-49af-b513-e6e96cab17c7\") " pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.328079 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj6cr" event={"ID":"e7b7d384-453b-42c0-89ec-b967910ab508","Type":"ContainerStarted","Data":"efab39781e6cf732a2c8a6979a026e3874ff5951e0a3ba07f750db996c1643f0"} Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.346546 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hj6cr" podStartSLOduration=1.708587306 podStartE2EDuration="3.346527649s" podCreationTimestamp="2025-11-24 21:24:17 +0000 UTC" firstStartedPulling="2025-11-24 21:24:18.302234591 +0000 UTC m=+276.618486764" lastFinishedPulling="2025-11-24 21:24:19.940174934 +0000 UTC m=+278.256427107" observedRunningTime="2025-11-24 21:24:20.344698159 +0000 UTC m=+278.660950352" watchObservedRunningTime="2025-11-24 21:24:20.346527649 +0000 UTC m=+278.662779822" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.354266 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.398460 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvj54"] Nov 24 21:24:20 crc kubenswrapper[4915]: W1124 21:24:20.406058 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc95008b7_0cb9_4be3_bb4e_178ff7d25cc3.slice/crio-81942af2c9ec93015799959de4f4eae26939933a4720cabca017bb61a8212d52 WatchSource:0}: Error finding container 81942af2c9ec93015799959de4f4eae26939933a4720cabca017bb61a8212d52: Status 404 returned error can't find the container with id 81942af2c9ec93015799959de4f4eae26939933a4720cabca017bb61a8212d52 Nov 24 21:24:20 crc kubenswrapper[4915]: I1124 21:24:20.592622 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4b2jj"] Nov 24 21:24:20 crc kubenswrapper[4915]: W1124 21:24:20.702643 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a466a23_e349_49af_b513_e6e96cab17c7.slice/crio-466de29c284b226182bf839b5c1e73a03196e08562f3c99fd64c84b5f54e7043 WatchSource:0}: Error finding container 466de29c284b226182bf839b5c1e73a03196e08562f3c99fd64c84b5f54e7043: Status 404 returned error can't find the container with id 466de29c284b226182bf839b5c1e73a03196e08562f3c99fd64c84b5f54e7043 Nov 24 21:24:21 crc kubenswrapper[4915]: I1124 21:24:21.339552 4915 generic.go:334] "Generic (PLEG): container finished" podID="e0ce54f3-6d64-4d55-85c4-1428aab6cad1" containerID="c06fd06629e316023ba83d66d941ebd5014a19c4210ece5809b7e82a24de9bb3" exitCode=0 Nov 24 21:24:21 crc kubenswrapper[4915]: I1124 21:24:21.339657 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hkdx" 
event={"ID":"e0ce54f3-6d64-4d55-85c4-1428aab6cad1","Type":"ContainerDied","Data":"c06fd06629e316023ba83d66d941ebd5014a19c4210ece5809b7e82a24de9bb3"} Nov 24 21:24:21 crc kubenswrapper[4915]: I1124 21:24:21.343748 4915 generic.go:334] "Generic (PLEG): container finished" podID="c95008b7-0cb9-4be3-bb4e-178ff7d25cc3" containerID="921b15fc7dd955f7592f1578aefc7ecbe1e7b2a7602ab41a8ab60d5bb83f7f59" exitCode=0 Nov 24 21:24:21 crc kubenswrapper[4915]: I1124 21:24:21.343857 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvj54" event={"ID":"c95008b7-0cb9-4be3-bb4e-178ff7d25cc3","Type":"ContainerDied","Data":"921b15fc7dd955f7592f1578aefc7ecbe1e7b2a7602ab41a8ab60d5bb83f7f59"} Nov 24 21:24:21 crc kubenswrapper[4915]: I1124 21:24:21.343892 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvj54" event={"ID":"c95008b7-0cb9-4be3-bb4e-178ff7d25cc3","Type":"ContainerStarted","Data":"81942af2c9ec93015799959de4f4eae26939933a4720cabca017bb61a8212d52"} Nov 24 21:24:21 crc kubenswrapper[4915]: I1124 21:24:21.347704 4915 generic.go:334] "Generic (PLEG): container finished" podID="2a466a23-e349-49af-b513-e6e96cab17c7" containerID="963ce93d2624d01fd654d3203bd4d85cb8e58ffb73cfcc849ac2cfb645d5e1b5" exitCode=0 Nov 24 21:24:21 crc kubenswrapper[4915]: I1124 21:24:21.347913 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4b2jj" event={"ID":"2a466a23-e349-49af-b513-e6e96cab17c7","Type":"ContainerDied","Data":"963ce93d2624d01fd654d3203bd4d85cb8e58ffb73cfcc849ac2cfb645d5e1b5"} Nov 24 21:24:21 crc kubenswrapper[4915]: I1124 21:24:21.347954 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4b2jj" event={"ID":"2a466a23-e349-49af-b513-e6e96cab17c7","Type":"ContainerStarted","Data":"466de29c284b226182bf839b5c1e73a03196e08562f3c99fd64c84b5f54e7043"} Nov 24 21:24:22 crc kubenswrapper[4915]: I1124 
21:24:22.354907 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hkdx" event={"ID":"e0ce54f3-6d64-4d55-85c4-1428aab6cad1","Type":"ContainerStarted","Data":"9426ef1ff5a92d2b038745718256dfe5038c364c3e8ea26478f0cee864caefb9"} Nov 24 21:24:22 crc kubenswrapper[4915]: I1124 21:24:22.356574 4915 generic.go:334] "Generic (PLEG): container finished" podID="c95008b7-0cb9-4be3-bb4e-178ff7d25cc3" containerID="83ef276c3fa766cce29ad0c88402ce06a8b2c19ea6e98b6cf68ce6df63a9f481" exitCode=0 Nov 24 21:24:22 crc kubenswrapper[4915]: I1124 21:24:22.356659 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvj54" event={"ID":"c95008b7-0cb9-4be3-bb4e-178ff7d25cc3","Type":"ContainerDied","Data":"83ef276c3fa766cce29ad0c88402ce06a8b2c19ea6e98b6cf68ce6df63a9f481"} Nov 24 21:24:22 crc kubenswrapper[4915]: I1124 21:24:22.362244 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4b2jj" event={"ID":"2a466a23-e349-49af-b513-e6e96cab17c7","Type":"ContainerStarted","Data":"8ff905a9784300a82e4e4085fd77055bb9ad9ed949f4b41e86553f3a9a10409a"} Nov 24 21:24:22 crc kubenswrapper[4915]: I1124 21:24:22.373827 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5hkdx" podStartSLOduration=2.921099199 podStartE2EDuration="5.373815729s" podCreationTimestamp="2025-11-24 21:24:17 +0000 UTC" firstStartedPulling="2025-11-24 21:24:19.319712051 +0000 UTC m=+277.635964224" lastFinishedPulling="2025-11-24 21:24:21.772428571 +0000 UTC m=+280.088680754" observedRunningTime="2025-11-24 21:24:22.373057528 +0000 UTC m=+280.689309711" watchObservedRunningTime="2025-11-24 21:24:22.373815729 +0000 UTC m=+280.690067912" Nov 24 21:24:23 crc kubenswrapper[4915]: I1124 21:24:23.369731 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvj54" 
event={"ID":"c95008b7-0cb9-4be3-bb4e-178ff7d25cc3","Type":"ContainerStarted","Data":"04d1ccf4d5bdb566c2bcade4c4b42432f92f79da1380cf5f3f19a3d3493409ad"} Nov 24 21:24:23 crc kubenswrapper[4915]: I1124 21:24:23.372280 4915 generic.go:334] "Generic (PLEG): container finished" podID="2a466a23-e349-49af-b513-e6e96cab17c7" containerID="8ff905a9784300a82e4e4085fd77055bb9ad9ed949f4b41e86553f3a9a10409a" exitCode=0 Nov 24 21:24:23 crc kubenswrapper[4915]: I1124 21:24:23.372376 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4b2jj" event={"ID":"2a466a23-e349-49af-b513-e6e96cab17c7","Type":"ContainerDied","Data":"8ff905a9784300a82e4e4085fd77055bb9ad9ed949f4b41e86553f3a9a10409a"} Nov 24 21:24:23 crc kubenswrapper[4915]: I1124 21:24:23.391025 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wvj54" podStartSLOduration=2.839516943 podStartE2EDuration="4.391005111s" podCreationTimestamp="2025-11-24 21:24:19 +0000 UTC" firstStartedPulling="2025-11-24 21:24:21.346469816 +0000 UTC m=+279.662721989" lastFinishedPulling="2025-11-24 21:24:22.897957984 +0000 UTC m=+281.214210157" observedRunningTime="2025-11-24 21:24:23.388498622 +0000 UTC m=+281.704750805" watchObservedRunningTime="2025-11-24 21:24:23.391005111 +0000 UTC m=+281.707257284" Nov 24 21:24:24 crc kubenswrapper[4915]: I1124 21:24:24.396313 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4b2jj" event={"ID":"2a466a23-e349-49af-b513-e6e96cab17c7","Type":"ContainerStarted","Data":"1a512b460158ce2fd5bc413703b53dd36da8064e4f52ecc2dd96888b7aa63278"} Nov 24 21:24:27 crc kubenswrapper[4915]: I1124 21:24:27.776662 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:27 crc kubenswrapper[4915]: I1124 21:24:27.777026 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:27 crc kubenswrapper[4915]: I1124 21:24:27.829786 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:27 crc kubenswrapper[4915]: I1124 21:24:27.848887 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4b2jj" podStartSLOduration=6.183086258 podStartE2EDuration="8.84886821s" podCreationTimestamp="2025-11-24 21:24:19 +0000 UTC" firstStartedPulling="2025-11-24 21:24:21.348887632 +0000 UTC m=+279.665139825" lastFinishedPulling="2025-11-24 21:24:24.014669604 +0000 UTC m=+282.330921777" observedRunningTime="2025-11-24 21:24:24.417615184 +0000 UTC m=+282.733867357" watchObservedRunningTime="2025-11-24 21:24:27.84886821 +0000 UTC m=+286.165120383" Nov 24 21:24:27 crc kubenswrapper[4915]: I1124 21:24:27.943688 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:27 crc kubenswrapper[4915]: I1124 21:24:27.943754 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:27 crc kubenswrapper[4915]: I1124 21:24:27.986743 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:28 crc kubenswrapper[4915]: I1124 21:24:28.460091 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hj6cr" Nov 24 21:24:28 crc kubenswrapper[4915]: I1124 21:24:28.471074 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 21:24:30 crc kubenswrapper[4915]: I1124 21:24:30.140347 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:30 crc kubenswrapper[4915]: I1124 21:24:30.140680 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:30 crc kubenswrapper[4915]: I1124 21:24:30.190479 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:30 crc kubenswrapper[4915]: I1124 21:24:30.354622 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:30 crc kubenswrapper[4915]: I1124 21:24:30.354684 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:30 crc kubenswrapper[4915]: I1124 21:24:30.407310 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:30 crc kubenswrapper[4915]: I1124 21:24:30.485072 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wvj54" Nov 24 21:24:30 crc kubenswrapper[4915]: I1124 21:24:30.503539 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4b2jj" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.589975 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85"] Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.591811 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.596116 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.596214 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.596589 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85"] Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.598218 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.598279 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.598580 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.689793 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/2cabad59-5958-4ded-9cb1-94e260dd31f8-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-r5b85\" (UID: \"2cabad59-5958-4ded-9cb1-94e260dd31f8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.689855 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/2cabad59-5958-4ded-9cb1-94e260dd31f8-telemetry-config\") pod 
\"cluster-monitoring-operator-6d5b84845-r5b85\" (UID: \"2cabad59-5958-4ded-9cb1-94e260dd31f8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.690174 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdf2v\" (UniqueName: \"kubernetes.io/projected/2cabad59-5958-4ded-9cb1-94e260dd31f8-kube-api-access-tdf2v\") pod \"cluster-monitoring-operator-6d5b84845-r5b85\" (UID: \"2cabad59-5958-4ded-9cb1-94e260dd31f8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.791762 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdf2v\" (UniqueName: \"kubernetes.io/projected/2cabad59-5958-4ded-9cb1-94e260dd31f8-kube-api-access-tdf2v\") pod \"cluster-monitoring-operator-6d5b84845-r5b85\" (UID: \"2cabad59-5958-4ded-9cb1-94e260dd31f8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.791860 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/2cabad59-5958-4ded-9cb1-94e260dd31f8-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-r5b85\" (UID: \"2cabad59-5958-4ded-9cb1-94e260dd31f8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.791894 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/2cabad59-5958-4ded-9cb1-94e260dd31f8-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-r5b85\" (UID: \"2cabad59-5958-4ded-9cb1-94e260dd31f8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85" Nov 24 21:24:45 crc 
kubenswrapper[4915]: I1124 21:24:45.792839 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/2cabad59-5958-4ded-9cb1-94e260dd31f8-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-r5b85\" (UID: \"2cabad59-5958-4ded-9cb1-94e260dd31f8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.800510 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/2cabad59-5958-4ded-9cb1-94e260dd31f8-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-r5b85\" (UID: \"2cabad59-5958-4ded-9cb1-94e260dd31f8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.814631 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdf2v\" (UniqueName: \"kubernetes.io/projected/2cabad59-5958-4ded-9cb1-94e260dd31f8-kube-api-access-tdf2v\") pod \"cluster-monitoring-operator-6d5b84845-r5b85\" (UID: \"2cabad59-5958-4ded-9cb1-94e260dd31f8\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85" Nov 24 21:24:45 crc kubenswrapper[4915]: I1124 21:24:45.916353 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85" Nov 24 21:24:46 crc kubenswrapper[4915]: I1124 21:24:46.153798 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85"] Nov 24 21:24:46 crc kubenswrapper[4915]: W1124 21:24:46.167147 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cabad59_5958_4ded_9cb1_94e260dd31f8.slice/crio-1021fc1c9732390fa3ed2b7c4304bb23744d0cc09461808bb183d38235ecc1b8 WatchSource:0}: Error finding container 1021fc1c9732390fa3ed2b7c4304bb23744d0cc09461808bb183d38235ecc1b8: Status 404 returned error can't find the container with id 1021fc1c9732390fa3ed2b7c4304bb23744d0cc09461808bb183d38235ecc1b8 Nov 24 21:24:46 crc kubenswrapper[4915]: I1124 21:24:46.523307 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85" event={"ID":"2cabad59-5958-4ded-9cb1-94e260dd31f8","Type":"ContainerStarted","Data":"1021fc1c9732390fa3ed2b7c4304bb23744d0cc09461808bb183d38235ecc1b8"} Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.187791 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-h77kv"] Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.188796 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.208206 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-h77kv"] Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.293429 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-njb65"] Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.294105 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-njb65" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.295705 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.295831 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-m2ssf" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.303217 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-njb65"] Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.326390 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7cpt\" (UniqueName: \"kubernetes.io/projected/b2c855b0-5649-4f72-be9c-5b43eb33dd91-kube-api-access-z7cpt\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.326435 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/b2c855b0-5649-4f72-be9c-5b43eb33dd91-installation-pull-secrets\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.326469 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b2c855b0-5649-4f72-be9c-5b43eb33dd91-registry-tls\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.326532 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2c855b0-5649-4f72-be9c-5b43eb33dd91-trusted-ca\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.326554 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b2c855b0-5649-4f72-be9c-5b43eb33dd91-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.326584 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc 
kubenswrapper[4915]: I1124 21:24:48.326606 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b2c855b0-5649-4f72-be9c-5b43eb33dd91-registry-certificates\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.326627 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b2c855b0-5649-4f72-be9c-5b43eb33dd91-bound-sa-token\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.346905 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.427698 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7cpt\" (UniqueName: \"kubernetes.io/projected/b2c855b0-5649-4f72-be9c-5b43eb33dd91-kube-api-access-z7cpt\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.427751 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b2c855b0-5649-4f72-be9c-5b43eb33dd91-installation-pull-secrets\") pod 
\"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.427810 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b2c855b0-5649-4f72-be9c-5b43eb33dd91-registry-tls\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.427850 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2c855b0-5649-4f72-be9c-5b43eb33dd91-trusted-ca\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.427881 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b2c855b0-5649-4f72-be9c-5b43eb33dd91-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.427915 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b2c855b0-5649-4f72-be9c-5b43eb33dd91-registry-certificates\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.427945 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: 
\"kubernetes.io/secret/9fc68880-8c9d-4804-acc7-838dcd3987ef-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-njb65\" (UID: \"9fc68880-8c9d-4804-acc7-838dcd3987ef\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-njb65" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.427983 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b2c855b0-5649-4f72-be9c-5b43eb33dd91-bound-sa-token\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.433610 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b2c855b0-5649-4f72-be9c-5b43eb33dd91-ca-trust-extracted\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.434528 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b2c855b0-5649-4f72-be9c-5b43eb33dd91-trusted-ca\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.435323 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b2c855b0-5649-4f72-be9c-5b43eb33dd91-registry-certificates\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.444251 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b2c855b0-5649-4f72-be9c-5b43eb33dd91-installation-pull-secrets\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.446236 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b2c855b0-5649-4f72-be9c-5b43eb33dd91-registry-tls\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.452504 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7cpt\" (UniqueName: \"kubernetes.io/projected/b2c855b0-5649-4f72-be9c-5b43eb33dd91-kube-api-access-z7cpt\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.460128 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b2c855b0-5649-4f72-be9c-5b43eb33dd91-bound-sa-token\") pod \"image-registry-66df7c8f76-h77kv\" (UID: \"b2c855b0-5649-4f72-be9c-5b43eb33dd91\") " pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.505170 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.533981 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9fc68880-8c9d-4804-acc7-838dcd3987ef-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-njb65\" (UID: \"9fc68880-8c9d-4804-acc7-838dcd3987ef\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-njb65" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.536747 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85" event={"ID":"2cabad59-5958-4ded-9cb1-94e260dd31f8","Type":"ContainerStarted","Data":"385baee6de58a7a7a2313ef36c8c3e58924aba80aef9ee4ebc9c1511a6dfb607"} Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.539354 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/9fc68880-8c9d-4804-acc7-838dcd3987ef-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-njb65\" (UID: \"9fc68880-8c9d-4804-acc7-838dcd3987ef\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-njb65" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.554647 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-r5b85" podStartSLOduration=2.036629498 podStartE2EDuration="3.554628872s" podCreationTimestamp="2025-11-24 21:24:45 +0000 UTC" firstStartedPulling="2025-11-24 21:24:46.171241455 +0000 UTC m=+304.487493628" lastFinishedPulling="2025-11-24 21:24:47.689240829 +0000 UTC m=+306.005493002" observedRunningTime="2025-11-24 21:24:48.552436851 +0000 UTC m=+306.868689024" watchObservedRunningTime="2025-11-24 21:24:48.554628872 +0000 UTC m=+306.870881045" Nov 24 21:24:48 crc 
kubenswrapper[4915]: I1124 21:24:48.608293 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-njb65" Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.700724 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-h77kv"] Nov 24 21:24:48 crc kubenswrapper[4915]: I1124 21:24:48.828063 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-njb65"] Nov 24 21:24:49 crc kubenswrapper[4915]: I1124 21:24:49.544551 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" event={"ID":"b2c855b0-5649-4f72-be9c-5b43eb33dd91","Type":"ContainerStarted","Data":"b6102e0ad46326f23df459aa475e16c0a026fe3ac658a8a0246547be92e2ce91"} Nov 24 21:24:49 crc kubenswrapper[4915]: I1124 21:24:49.544619 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" event={"ID":"b2c855b0-5649-4f72-be9c-5b43eb33dd91","Type":"ContainerStarted","Data":"a56ae939cbf78d6be99995863255c691785e72934e320ac48d84674c68636756"} Nov 24 21:24:49 crc kubenswrapper[4915]: I1124 21:24:49.544695 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 21:24:49 crc kubenswrapper[4915]: I1124 21:24:49.549182 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-njb65" event={"ID":"9fc68880-8c9d-4804-acc7-838dcd3987ef","Type":"ContainerStarted","Data":"f7353ff48d2e887a41665584b7d7b79e4c5517c852ee685bcf22b81a366d48bf"} Nov 24 21:24:49 crc kubenswrapper[4915]: I1124 21:24:49.583106 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" 
podStartSLOduration=1.5830676750000001 podStartE2EDuration="1.583067675s" podCreationTimestamp="2025-11-24 21:24:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:24:49.577037139 +0000 UTC m=+307.893289352" watchObservedRunningTime="2025-11-24 21:24:49.583067675 +0000 UTC m=+307.899319928" Nov 24 21:24:50 crc kubenswrapper[4915]: I1124 21:24:50.562265 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-njb65" event={"ID":"9fc68880-8c9d-4804-acc7-838dcd3987ef","Type":"ContainerStarted","Data":"d06c3ff3bc7406e5fc7c7db999bed344643e1ff3941ceeeddd8e895fb34d8681"} Nov 24 21:24:50 crc kubenswrapper[4915]: I1124 21:24:50.562501 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-njb65" Nov 24 21:24:50 crc kubenswrapper[4915]: I1124 21:24:50.570577 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-njb65" Nov 24 21:24:50 crc kubenswrapper[4915]: I1124 21:24:50.583016 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-njb65" podStartSLOduration=1.118147344 podStartE2EDuration="2.582997861s" podCreationTimestamp="2025-11-24 21:24:48 +0000 UTC" firstStartedPulling="2025-11-24 21:24:48.837309314 +0000 UTC m=+307.153561507" lastFinishedPulling="2025-11-24 21:24:50.302159851 +0000 UTC m=+308.618412024" observedRunningTime="2025-11-24 21:24:50.580236865 +0000 UTC m=+308.896489038" watchObservedRunningTime="2025-11-24 21:24:50.582997861 +0000 UTC m=+308.899250034" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.371916 4915 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/prometheus-operator-db54df47d-2ttpz"] Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.372847 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.375759 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.375844 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.375763 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-pv4vx" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.377096 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.405856 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-2ttpz"] Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.479680 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlkx7\" (UniqueName: \"kubernetes.io/projected/0e8df346-a220-44fd-98d3-f043ab57944f-kube-api-access-vlkx7\") pod \"prometheus-operator-db54df47d-2ttpz\" (UID: \"0e8df346-a220-44fd-98d3-f043ab57944f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.480042 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0e8df346-a220-44fd-98d3-f043ab57944f-metrics-client-ca\") pod \"prometheus-operator-db54df47d-2ttpz\" (UID: 
\"0e8df346-a220-44fd-98d3-f043ab57944f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.480210 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0e8df346-a220-44fd-98d3-f043ab57944f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-2ttpz\" (UID: \"0e8df346-a220-44fd-98d3-f043ab57944f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.480338 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e8df346-a220-44fd-98d3-f043ab57944f-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-2ttpz\" (UID: \"0e8df346-a220-44fd-98d3-f043ab57944f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.581341 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlkx7\" (UniqueName: \"kubernetes.io/projected/0e8df346-a220-44fd-98d3-f043ab57944f-kube-api-access-vlkx7\") pod \"prometheus-operator-db54df47d-2ttpz\" (UID: \"0e8df346-a220-44fd-98d3-f043ab57944f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.581423 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0e8df346-a220-44fd-98d3-f043ab57944f-metrics-client-ca\") pod \"prometheus-operator-db54df47d-2ttpz\" (UID: \"0e8df346-a220-44fd-98d3-f043ab57944f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.581481 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0e8df346-a220-44fd-98d3-f043ab57944f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-2ttpz\" (UID: \"0e8df346-a220-44fd-98d3-f043ab57944f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.581510 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e8df346-a220-44fd-98d3-f043ab57944f-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-2ttpz\" (UID: \"0e8df346-a220-44fd-98d3-f043ab57944f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.584919 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0e8df346-a220-44fd-98d3-f043ab57944f-metrics-client-ca\") pod \"prometheus-operator-db54df47d-2ttpz\" (UID: \"0e8df346-a220-44fd-98d3-f043ab57944f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.591546 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0e8df346-a220-44fd-98d3-f043ab57944f-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-2ttpz\" (UID: \"0e8df346-a220-44fd-98d3-f043ab57944f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.595232 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/0e8df346-a220-44fd-98d3-f043ab57944f-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-2ttpz\" (UID: 
\"0e8df346-a220-44fd-98d3-f043ab57944f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.600315 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlkx7\" (UniqueName: \"kubernetes.io/projected/0e8df346-a220-44fd-98d3-f043ab57944f-kube-api-access-vlkx7\") pod \"prometheus-operator-db54df47d-2ttpz\" (UID: \"0e8df346-a220-44fd-98d3-f043ab57944f\") " pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.706970 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" Nov 24 21:24:51 crc kubenswrapper[4915]: I1124 21:24:51.979983 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-2ttpz"] Nov 24 21:24:52 crc kubenswrapper[4915]: I1124 21:24:52.575093 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" event={"ID":"0e8df346-a220-44fd-98d3-f043ab57944f","Type":"ContainerStarted","Data":"b8ce74e3395ca5eaea786e38df3b79a6be460359223e52db02a9173096b5c857"} Nov 24 21:24:54 crc kubenswrapper[4915]: I1124 21:24:54.590597 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" event={"ID":"0e8df346-a220-44fd-98d3-f043ab57944f","Type":"ContainerStarted","Data":"5513fad1bec7727c4d65944a8e328118788d1e08250ef99d1bf04b6fac79c16d"} Nov 24 21:24:54 crc kubenswrapper[4915]: I1124 21:24:54.591334 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" event={"ID":"0e8df346-a220-44fd-98d3-f043ab57944f","Type":"ContainerStarted","Data":"f49481dc3eb75b4ac360eb08d93e0b311907935fc697b759750562d9552bd338"} Nov 24 21:24:54 crc kubenswrapper[4915]: I1124 21:24:54.611522 4915 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-2ttpz" podStartSLOduration=1.852449794 podStartE2EDuration="3.611501001s" podCreationTimestamp="2025-11-24 21:24:51 +0000 UTC" firstStartedPulling="2025-11-24 21:24:51.992797449 +0000 UTC m=+310.309049622" lastFinishedPulling="2025-11-24 21:24:53.751848636 +0000 UTC m=+312.068100829" observedRunningTime="2025-11-24 21:24:54.605947667 +0000 UTC m=+312.922199860" watchObservedRunningTime="2025-11-24 21:24:54.611501001 +0000 UTC m=+312.927753184" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.777860 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-krrg6"] Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.780272 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.781914 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-99dlp"] Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.782463 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-glqdd" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.783189 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.783346 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.784251 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.784574 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.785248 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-pqskm" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.792651 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-krrg6"] Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.794196 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.833965 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q"] Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.835357 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.838977 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Nov 24 21:24:56 crc kubenswrapper[4915]: W1124 21:24:56.839185 4915 reflector.go:561] object-"openshift-monitoring"/"kube-state-metrics-dockercfg-npj8k": failed to list *v1.Secret: secrets "kube-state-metrics-dockercfg-npj8k" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-monitoring": no relationship found between node 'crc' and this object Nov 24 21:24:56 crc kubenswrapper[4915]: E1124 21:24:56.839225 4915 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"kube-state-metrics-dockercfg-npj8k\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"kube-state-metrics-dockercfg-npj8k\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-monitoring\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.839327 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.840792 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.853347 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-metrics-client-ca\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 
21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.853406 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-node-exporter-textfile\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.853476 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-node-exporter-wtmp\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.853536 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g2nk\" (UniqueName: \"kubernetes.io/projected/b1a3faaf-a06f-4214-9a37-3b5893ef8558-kube-api-access-4g2nk\") pod \"openshift-state-metrics-566fddb674-krrg6\" (UID: \"b1a3faaf-a06f-4214-9a37-3b5893ef8558\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.853577 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-node-exporter-tls\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.853608 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rbd8\" (UniqueName: \"kubernetes.io/projected/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-kube-api-access-8rbd8\") pod \"node-exporter-99dlp\" 
(UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.853664 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b1a3faaf-a06f-4214-9a37-3b5893ef8558-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-krrg6\" (UID: \"b1a3faaf-a06f-4214-9a37-3b5893ef8558\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.853727 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.853752 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1a3faaf-a06f-4214-9a37-3b5893ef8558-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-krrg6\" (UID: \"b1a3faaf-a06f-4214-9a37-3b5893ef8558\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.853782 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-sys\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.853807 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b1a3faaf-a06f-4214-9a37-3b5893ef8558-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-krrg6\" (UID: \"b1a3faaf-a06f-4214-9a37-3b5893ef8558\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.853975 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-root\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.858472 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q"] Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955149 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/528e2728-9273-480d-b946-8e5f92d87491-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955212 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bgt4\" (UniqueName: \"kubernetes.io/projected/528e2728-9273-480d-b946-8e5f92d87491-kube-api-access-6bgt4\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955256 4915 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4g2nk\" (UniqueName: \"kubernetes.io/projected/b1a3faaf-a06f-4214-9a37-3b5893ef8558-kube-api-access-4g2nk\") pod \"openshift-state-metrics-566fddb674-krrg6\" (UID: \"b1a3faaf-a06f-4214-9a37-3b5893ef8558\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955291 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-node-exporter-tls\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955320 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/528e2728-9273-480d-b946-8e5f92d87491-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955342 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rbd8\" (UniqueName: \"kubernetes.io/projected/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-kube-api-access-8rbd8\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955363 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/528e2728-9273-480d-b946-8e5f92d87491-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955385 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b1a3faaf-a06f-4214-9a37-3b5893ef8558-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-krrg6\" (UID: \"b1a3faaf-a06f-4214-9a37-3b5893ef8558\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955415 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955439 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1a3faaf-a06f-4214-9a37-3b5893ef8558-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-krrg6\" (UID: \"b1a3faaf-a06f-4214-9a37-3b5893ef8558\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955472 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-sys\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955498 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b1a3faaf-a06f-4214-9a37-3b5893ef8558-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-krrg6\" (UID: \"b1a3faaf-a06f-4214-9a37-3b5893ef8558\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955521 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/528e2728-9273-480d-b946-8e5f92d87491-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955543 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-root\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955568 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/528e2728-9273-480d-b946-8e5f92d87491-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955601 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-metrics-client-ca\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955628 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-node-exporter-textfile\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955649 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-node-exporter-wtmp\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.955876 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-node-exporter-wtmp\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.956401 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-sys\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.957077 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-root\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.957882 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/b1a3faaf-a06f-4214-9a37-3b5893ef8558-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-krrg6\" (UID: \"b1a3faaf-a06f-4214-9a37-3b5893ef8558\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.958348 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-node-exporter-textfile\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.958481 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-metrics-client-ca\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.962225 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-node-exporter-tls\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.962574 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/b1a3faaf-a06f-4214-9a37-3b5893ef8558-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-krrg6\" (UID: \"b1a3faaf-a06f-4214-9a37-3b5893ef8558\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.963496 4915 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/b1a3faaf-a06f-4214-9a37-3b5893ef8558-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-krrg6\" (UID: \"b1a3faaf-a06f-4214-9a37-3b5893ef8558\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.964126 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.975250 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g2nk\" (UniqueName: \"kubernetes.io/projected/b1a3faaf-a06f-4214-9a37-3b5893ef8558-kube-api-access-4g2nk\") pod \"openshift-state-metrics-566fddb674-krrg6\" (UID: \"b1a3faaf-a06f-4214-9a37-3b5893ef8558\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" Nov 24 21:24:56 crc kubenswrapper[4915]: I1124 21:24:56.975677 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rbd8\" (UniqueName: \"kubernetes.io/projected/45592d0e-de97-4ea1-9f82-9980c8cdd3c1-kube-api-access-8rbd8\") pod \"node-exporter-99dlp\" (UID: \"45592d0e-de97-4ea1-9f82-9980c8cdd3c1\") " pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.056820 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bgt4\" (UniqueName: \"kubernetes.io/projected/528e2728-9273-480d-b946-8e5f92d87491-kube-api-access-6bgt4\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:57 crc 
kubenswrapper[4915]: I1124 21:24:57.056909 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/528e2728-9273-480d-b946-8e5f92d87491-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.056949 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/528e2728-9273-480d-b946-8e5f92d87491-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.056987 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/528e2728-9273-480d-b946-8e5f92d87491-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.057034 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/528e2728-9273-480d-b946-8e5f92d87491-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.057077 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: 
\"kubernetes.io/configmap/528e2728-9273-480d-b946-8e5f92d87491-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.057575 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/528e2728-9273-480d-b946-8e5f92d87491-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.058157 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/528e2728-9273-480d-b946-8e5f92d87491-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.058396 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/528e2728-9273-480d-b946-8e5f92d87491-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.060141 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/528e2728-9273-480d-b946-8e5f92d87491-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " 
pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.061269 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/528e2728-9273-480d-b946-8e5f92d87491-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.075803 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bgt4\" (UniqueName: \"kubernetes.io/projected/528e2728-9273-480d-b946-8e5f92d87491-kube-api-access-6bgt4\") pod \"kube-state-metrics-777cb5bd5d-zbx7q\" (UID: \"528e2728-9273-480d-b946-8e5f92d87491\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.101760 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.109192 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-99dlp" Nov 24 21:24:57 crc kubenswrapper[4915]: W1124 21:24:57.131974 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45592d0e_de97_4ea1_9f82_9980c8cdd3c1.slice/crio-ada7b71364cf4547b27b0c5252c98d65b97816986cbf48e483bbc498aca4918d WatchSource:0}: Error finding container ada7b71364cf4547b27b0c5252c98d65b97816986cbf48e483bbc498aca4918d: Status 404 returned error can't find the container with id ada7b71364cf4547b27b0c5252c98d65b97816986cbf48e483bbc498aca4918d Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.304765 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-krrg6"] Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.604168 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-99dlp" event={"ID":"45592d0e-de97-4ea1-9f82-9980c8cdd3c1","Type":"ContainerStarted","Data":"ada7b71364cf4547b27b0c5252c98d65b97816986cbf48e483bbc498aca4918d"} Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.605761 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" event={"ID":"b1a3faaf-a06f-4214-9a37-3b5893ef8558","Type":"ContainerStarted","Data":"caaad87b67d356cb9e1996741fb11ab478eb0e9c8d7298fa00df5cbafe0681cc"} Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.605829 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" event={"ID":"b1a3faaf-a06f-4214-9a37-3b5893ef8558","Type":"ContainerStarted","Data":"df1aa2b66f9379e9988aab962309fd0f06271f9adafc96b9ab32dd5de8601cb8"} Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.605842 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" 
event={"ID":"b1a3faaf-a06f-4214-9a37-3b5893ef8558","Type":"ContainerStarted","Data":"6f4f8de1b3def8eb2650e944f1cc01767186d6c62bb5d099c14e4a8cf7331b86"} Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.878830 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.882121 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.884396 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.885386 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.885876 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.886245 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.886411 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.886523 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-pztqj" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.886627 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.887349 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Nov 24 21:24:57 
crc kubenswrapper[4915]: I1124 21:24:57.899475 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.915718 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.969494 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.969557 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2aecaa08-c5f2-452a-95c3-9450177b243f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.969588 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqzl6\" (UniqueName: \"kubernetes.io/projected/2aecaa08-c5f2-452a-95c3-9450177b243f-kube-api-access-gqzl6\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.969607 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: 
\"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.969654 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.969711 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.969733 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2aecaa08-c5f2-452a-95c3-9450177b243f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.969796 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2aecaa08-c5f2-452a-95c3-9450177b243f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.969827 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/2aecaa08-c5f2-452a-95c3-9450177b243f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.969860 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-web-config\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.969881 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-config-volume\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:57 crc kubenswrapper[4915]: I1124 21:24:57.969904 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2aecaa08-c5f2-452a-95c3-9450177b243f-config-out\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.005158 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-npj8k" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.014204 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.070824 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-config-volume\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.070872 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2aecaa08-c5f2-452a-95c3-9450177b243f-config-out\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.070906 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.070934 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2aecaa08-c5f2-452a-95c3-9450177b243f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.070956 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqzl6\" (UniqueName: \"kubernetes.io/projected/2aecaa08-c5f2-452a-95c3-9450177b243f-kube-api-access-gqzl6\") pod \"alertmanager-main-0\" (UID: 
\"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.070974 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.070995 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.071012 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.071025 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2aecaa08-c5f2-452a-95c3-9450177b243f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.071041 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2aecaa08-c5f2-452a-95c3-9450177b243f-metrics-client-ca\") 
pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.071070 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aecaa08-c5f2-452a-95c3-9450177b243f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.071087 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-web-config\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: E1124 21:24:58.072285 4915 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Nov 24 21:24:58 crc kubenswrapper[4915]: E1124 21:24:58.072370 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-main-tls podName:2aecaa08-c5f2-452a-95c3-9450177b243f nodeName:}" failed. No retries permitted until 2025-11-24 21:24:58.572346173 +0000 UTC m=+316.888598396 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2aecaa08-c5f2-452a-95c3-9450177b243f") : secret "alertmanager-main-tls" not found Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.073026 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2aecaa08-c5f2-452a-95c3-9450177b243f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.073423 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2aecaa08-c5f2-452a-95c3-9450177b243f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.074233 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2aecaa08-c5f2-452a-95c3-9450177b243f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.076871 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2aecaa08-c5f2-452a-95c3-9450177b243f-config-out\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.077469 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.078102 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-config-volume\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.079213 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2aecaa08-c5f2-452a-95c3-9450177b243f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.079663 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.098240 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-web-config\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.099018 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.108737 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqzl6\" (UniqueName: \"kubernetes.io/projected/2aecaa08-c5f2-452a-95c3-9450177b243f-kube-api-access-gqzl6\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.449243 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q"] Nov 24 21:24:58 crc kubenswrapper[4915]: W1124 21:24:58.466270 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod528e2728_9273_480d_b946_8e5f92d87491.slice/crio-17bc908a2500de4adf550f91681540a5329cb4bdc7f3df5fc44a3484e4a2c46f WatchSource:0}: Error finding container 17bc908a2500de4adf550f91681540a5329cb4bdc7f3df5fc44a3484e4a2c46f: Status 404 returned error can't find the container with id 17bc908a2500de4adf550f91681540a5329cb4bdc7f3df5fc44a3484e4a2c46f Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.578646 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.583008 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: 
\"kubernetes.io/secret/2aecaa08-c5f2-452a-95c3-9450177b243f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2aecaa08-c5f2-452a-95c3-9450177b243f\") " pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.611808 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-99dlp" event={"ID":"45592d0e-de97-4ea1-9f82-9980c8cdd3c1","Type":"ContainerStarted","Data":"b2c24dbe75a5cd83266ea5387808857ab9a3545e9823dfdca21b35ef21a035be"} Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.612878 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" event={"ID":"528e2728-9273-480d-b946-8e5f92d87491","Type":"ContainerStarted","Data":"17bc908a2500de4adf550f91681540a5329cb4bdc7f3df5fc44a3484e4a2c46f"} Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.782438 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-759576649c-dxbqx"] Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.783938 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.786733 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.790254 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.791033 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-flks7" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.791545 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.791703 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.792194 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.793607 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-cu905qdt590ls" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.796561 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.804884 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-759576649c-dxbqx"] Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.881674 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-grpc-tls\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.882048 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.882085 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-tls\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.882120 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7c402413-6f64-484b-86cd-86c462006072-metrics-client-ca\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " 
pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.882189 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.882223 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.882256 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gstgw\" (UniqueName: \"kubernetes.io/projected/7c402413-6f64-484b-86cd-86c462006072-kube-api-access-gstgw\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.882278 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.983715 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.983793 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gstgw\" (UniqueName: \"kubernetes.io/projected/7c402413-6f64-484b-86cd-86c462006072-kube-api-access-gstgw\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.983878 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.984072 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-grpc-tls\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.984108 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-kube-rbac-proxy-web\") pod 
\"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.984155 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-tls\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.984191 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7c402413-6f64-484b-86cd-86c462006072-metrics-client-ca\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.984222 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.986090 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7c402413-6f64-484b-86cd-86c462006072-metrics-client-ca\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.989682 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.991019 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-grpc-tls\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.991179 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-tls\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.991369 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.992081 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " 
pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:58 crc kubenswrapper[4915]: I1124 21:24:58.992170 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/7c402413-6f64-484b-86cd-86c462006072-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:59 crc kubenswrapper[4915]: I1124 21:24:59.010593 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gstgw\" (UniqueName: \"kubernetes.io/projected/7c402413-6f64-484b-86cd-86c462006072-kube-api-access-gstgw\") pod \"thanos-querier-759576649c-dxbqx\" (UID: \"7c402413-6f64-484b-86cd-86c462006072\") " pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:59 crc kubenswrapper[4915]: I1124 21:24:59.060048 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Nov 24 21:24:59 crc kubenswrapper[4915]: W1124 21:24:59.068239 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aecaa08_c5f2_452a_95c3_9450177b243f.slice/crio-bb306ee7adb6af568e353b0a81a43258b2d2e1a012bd0d32247ba80df7790c90 WatchSource:0}: Error finding container bb306ee7adb6af568e353b0a81a43258b2d2e1a012bd0d32247ba80df7790c90: Status 404 returned error can't find the container with id bb306ee7adb6af568e353b0a81a43258b2d2e1a012bd0d32247ba80df7790c90 Nov 24 21:24:59 crc kubenswrapper[4915]: I1124 21:24:59.101628 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:24:59 crc kubenswrapper[4915]: I1124 21:24:59.297003 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-759576649c-dxbqx"] Nov 24 21:24:59 crc kubenswrapper[4915]: W1124 21:24:59.306679 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c402413_6f64_484b_86cd_86c462006072.slice/crio-18b016fa4fc990d1d48d67ce141a14dab94d6c18f1211f4ecbae427a0ac7a785 WatchSource:0}: Error finding container 18b016fa4fc990d1d48d67ce141a14dab94d6c18f1211f4ecbae427a0ac7a785: Status 404 returned error can't find the container with id 18b016fa4fc990d1d48d67ce141a14dab94d6c18f1211f4ecbae427a0ac7a785 Nov 24 21:24:59 crc kubenswrapper[4915]: I1124 21:24:59.621394 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" event={"ID":"7c402413-6f64-484b-86cd-86c462006072","Type":"ContainerStarted","Data":"18b016fa4fc990d1d48d67ce141a14dab94d6c18f1211f4ecbae427a0ac7a785"} Nov 24 21:24:59 crc kubenswrapper[4915]: I1124 21:24:59.623144 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" event={"ID":"b1a3faaf-a06f-4214-9a37-3b5893ef8558","Type":"ContainerStarted","Data":"07c45f6ebed6799b1a3556e78ec6ebd42c56132d80c3f999b76ac3373f897512"} Nov 24 21:24:59 crc kubenswrapper[4915]: I1124 21:24:59.624852 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2aecaa08-c5f2-452a-95c3-9450177b243f","Type":"ContainerStarted","Data":"bb306ee7adb6af568e353b0a81a43258b2d2e1a012bd0d32247ba80df7790c90"} Nov 24 21:24:59 crc kubenswrapper[4915]: I1124 21:24:59.626922 4915 generic.go:334] "Generic (PLEG): container finished" podID="45592d0e-de97-4ea1-9f82-9980c8cdd3c1" 
containerID="b2c24dbe75a5cd83266ea5387808857ab9a3545e9823dfdca21b35ef21a035be" exitCode=0 Nov 24 21:24:59 crc kubenswrapper[4915]: I1124 21:24:59.626968 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-99dlp" event={"ID":"45592d0e-de97-4ea1-9f82-9980c8cdd3c1","Type":"ContainerDied","Data":"b2c24dbe75a5cd83266ea5387808857ab9a3545e9823dfdca21b35ef21a035be"} Nov 24 21:24:59 crc kubenswrapper[4915]: I1124 21:24:59.643654 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-krrg6" podStartSLOduration=2.284158869 podStartE2EDuration="3.643637728s" podCreationTimestamp="2025-11-24 21:24:56 +0000 UTC" firstStartedPulling="2025-11-24 21:24:57.556055995 +0000 UTC m=+315.872308158" lastFinishedPulling="2025-11-24 21:24:58.915534844 +0000 UTC m=+317.231787017" observedRunningTime="2025-11-24 21:24:59.64007715 +0000 UTC m=+317.956329323" watchObservedRunningTime="2025-11-24 21:24:59.643637728 +0000 UTC m=+317.959889901" Nov 24 21:25:00 crc kubenswrapper[4915]: I1124 21:25:00.634283 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" event={"ID":"528e2728-9273-480d-b946-8e5f92d87491","Type":"ContainerStarted","Data":"45b4870d0e2707fd17650b917dfa43962cb5748a15bb6a92c481cf0e67dcd63e"} Nov 24 21:25:00 crc kubenswrapper[4915]: I1124 21:25:00.634930 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" event={"ID":"528e2728-9273-480d-b946-8e5f92d87491","Type":"ContainerStarted","Data":"387f705dfdc93ab51e9841cbc7a783863b27f5ace7aba4698f3f8d91940fe33e"} Nov 24 21:25:00 crc kubenswrapper[4915]: I1124 21:25:00.634949 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" 
event={"ID":"528e2728-9273-480d-b946-8e5f92d87491","Type":"ContainerStarted","Data":"771ab4011ade0db64e03fcb037ab4c2593017da522274837ab023b3613adc7af"} Nov 24 21:25:00 crc kubenswrapper[4915]: I1124 21:25:00.641438 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-99dlp" event={"ID":"45592d0e-de97-4ea1-9f82-9980c8cdd3c1","Type":"ContainerStarted","Data":"261366deda18fe917f15a5a2c52e7f63fe069758fc7729a9701dc5fe0a97bf53"} Nov 24 21:25:00 crc kubenswrapper[4915]: I1124 21:25:00.641482 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-99dlp" event={"ID":"45592d0e-de97-4ea1-9f82-9980c8cdd3c1","Type":"ContainerStarted","Data":"5e68cb9f18a36f8dab7c19cb0db7a2efcdd6e1fc29ab5931711ee6d65b6e4611"} Nov 24 21:25:00 crc kubenswrapper[4915]: I1124 21:25:00.665611 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-zbx7q" podStartSLOduration=3.168003551 podStartE2EDuration="4.665558041s" podCreationTimestamp="2025-11-24 21:24:56 +0000 UTC" firstStartedPulling="2025-11-24 21:24:58.47044743 +0000 UTC m=+316.786699603" lastFinishedPulling="2025-11-24 21:24:59.96800192 +0000 UTC m=+318.284254093" observedRunningTime="2025-11-24 21:25:00.656645346 +0000 UTC m=+318.972897539" watchObservedRunningTime="2025-11-24 21:25:00.665558041 +0000 UTC m=+318.981810224" Nov 24 21:25:00 crc kubenswrapper[4915]: I1124 21:25:00.675900 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-99dlp" podStartSLOduration=3.372785243 podStartE2EDuration="4.675878266s" podCreationTimestamp="2025-11-24 21:24:56 +0000 UTC" firstStartedPulling="2025-11-24 21:24:57.136852305 +0000 UTC m=+315.453104478" lastFinishedPulling="2025-11-24 21:24:58.439945328 +0000 UTC m=+316.756197501" observedRunningTime="2025-11-24 21:25:00.675675511 +0000 UTC m=+318.991927714" 
watchObservedRunningTime="2025-11-24 21:25:00.675878266 +0000 UTC m=+318.992130439" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.552285 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7bc4d6bc5d-txx68"] Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.553688 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.562324 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bc4d6bc5d-txx68"] Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.621754 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-oauth-config\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.621897 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-trusted-ca-bundle\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.621923 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-oauth-serving-cert\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.621948 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-service-ca\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.622000 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-config\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.622021 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-serving-cert\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.622065 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f69h5\" (UniqueName: \"kubernetes.io/projected/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-kube-api-access-f69h5\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.646542 4915 generic.go:334] "Generic (PLEG): container finished" podID="2aecaa08-c5f2-452a-95c3-9450177b243f" containerID="4a25428b097e89b6a43f5a53e3d00868242b0eea45b111aefb5026edb51ec464" exitCode=0 Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.646862 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"2aecaa08-c5f2-452a-95c3-9450177b243f","Type":"ContainerDied","Data":"4a25428b097e89b6a43f5a53e3d00868242b0eea45b111aefb5026edb51ec464"} Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.722992 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-trusted-ca-bundle\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.723172 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-oauth-serving-cert\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.723268 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-service-ca\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.723331 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-config\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.723394 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-serving-cert\") pod 
\"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.723478 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f69h5\" (UniqueName: \"kubernetes.io/projected/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-kube-api-access-f69h5\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.724142 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-oauth-serving-cert\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.724265 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-trusted-ca-bundle\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.724454 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-config\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.725148 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-service-ca\") pod \"console-7bc4d6bc5d-txx68\" (UID: 
\"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.725259 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-oauth-config\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.747362 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-serving-cert\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.754078 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-oauth-config\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.757015 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f69h5\" (UniqueName: \"kubernetes.io/projected/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-kube-api-access-f69h5\") pod \"console-7bc4d6bc5d-txx68\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:01 crc kubenswrapper[4915]: I1124 21:25:01.876574 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.177404 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bc4d6bc5d-txx68"] Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.215964 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-689f8b4874-tvgnn"] Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.217187 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.219837 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-2di7o5hs2jouq" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.220348 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.220848 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-l7z29" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.220881 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.221058 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.221146 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.231732 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-689f8b4874-tvgnn"] Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.333391 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5355b4f4-cfc3-4083-bbc4-57ccfb241098-metrics-server-audit-profiles\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.333748 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd785\" (UniqueName: \"kubernetes.io/projected/5355b4f4-cfc3-4083-bbc4-57ccfb241098-kube-api-access-zd785\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.333798 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5355b4f4-cfc3-4083-bbc4-57ccfb241098-secret-metrics-client-certs\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.333850 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5355b4f4-cfc3-4083-bbc4-57ccfb241098-client-ca-bundle\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.333894 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5355b4f4-cfc3-4083-bbc4-57ccfb241098-secret-metrics-server-tls\") pod 
\"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.333919 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5355b4f4-cfc3-4083-bbc4-57ccfb241098-audit-log\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.333973 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5355b4f4-cfc3-4083-bbc4-57ccfb241098-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.434825 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5355b4f4-cfc3-4083-bbc4-57ccfb241098-client-ca-bundle\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.434891 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5355b4f4-cfc3-4083-bbc4-57ccfb241098-secret-metrics-server-tls\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.435123 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5355b4f4-cfc3-4083-bbc4-57ccfb241098-audit-log\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.435295 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5355b4f4-cfc3-4083-bbc4-57ccfb241098-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.435443 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5355b4f4-cfc3-4083-bbc4-57ccfb241098-metrics-server-audit-profiles\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.435465 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd785\" (UniqueName: \"kubernetes.io/projected/5355b4f4-cfc3-4083-bbc4-57ccfb241098-kube-api-access-zd785\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.435614 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5355b4f4-cfc3-4083-bbc4-57ccfb241098-secret-metrics-client-certs\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 
21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.435648 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5355b4f4-cfc3-4083-bbc4-57ccfb241098-audit-log\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.436155 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5355b4f4-cfc3-4083-bbc4-57ccfb241098-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.437483 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5355b4f4-cfc3-4083-bbc4-57ccfb241098-metrics-server-audit-profiles\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.441755 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5355b4f4-cfc3-4083-bbc4-57ccfb241098-client-ca-bundle\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.444761 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5355b4f4-cfc3-4083-bbc4-57ccfb241098-secret-metrics-client-certs\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: 
\"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.449292 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5355b4f4-cfc3-4083-bbc4-57ccfb241098-secret-metrics-server-tls\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.459400 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd785\" (UniqueName: \"kubernetes.io/projected/5355b4f4-cfc3-4083-bbc4-57ccfb241098-kube-api-access-zd785\") pod \"metrics-server-689f8b4874-tvgnn\" (UID: \"5355b4f4-cfc3-4083-bbc4-57ccfb241098\") " pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.510593 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-58ddbf6699-crdfx"] Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.511517 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-58ddbf6699-crdfx" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.518347 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-58ddbf6699-crdfx"] Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.519491 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.519653 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.606526 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.638478 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/57116b5a-d040-43df-8d9d-22422f5fba5c-monitoring-plugin-cert\") pod \"monitoring-plugin-58ddbf6699-crdfx\" (UID: \"57116b5a-d040-43df-8d9d-22422f5fba5c\") " pod="openshift-monitoring/monitoring-plugin-58ddbf6699-crdfx" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.672740 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" event={"ID":"7c402413-6f64-484b-86cd-86c462006072","Type":"ContainerStarted","Data":"1402d95d089a183aff2bb9a87c92952010de95fe229a24da9f345abe01590913"} Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.674350 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" event={"ID":"7c402413-6f64-484b-86cd-86c462006072","Type":"ContainerStarted","Data":"377cb229e37a21e52cd8440fee408bc4376c04103aa6a9276065afcd68a8b254"} Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.674390 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" event={"ID":"7c402413-6f64-484b-86cd-86c462006072","Type":"ContainerStarted","Data":"054893b05e35dec4c55a71a87505ac967ce4982a090b5ac1009a17238712bd7a"} Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.676770 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bc4d6bc5d-txx68" event={"ID":"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0","Type":"ContainerStarted","Data":"a15f5ffd422a3851b32f1205cf09e357a18ad04f91819c6ab85d878a7071b232"} Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.676829 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-7bc4d6bc5d-txx68" event={"ID":"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0","Type":"ContainerStarted","Data":"b6ab915899a535c626e29a82051a18645e03bad7702ab5ee87bd42fdd05ccc8c"} Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.699303 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7bc4d6bc5d-txx68" podStartSLOduration=1.699282208 podStartE2EDuration="1.699282208s" podCreationTimestamp="2025-11-24 21:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:25:02.697580971 +0000 UTC m=+321.013833144" watchObservedRunningTime="2025-11-24 21:25:02.699282208 +0000 UTC m=+321.015534381" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.741745 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/57116b5a-d040-43df-8d9d-22422f5fba5c-monitoring-plugin-cert\") pod \"monitoring-plugin-58ddbf6699-crdfx\" (UID: \"57116b5a-d040-43df-8d9d-22422f5fba5c\") " pod="openshift-monitoring/monitoring-plugin-58ddbf6699-crdfx" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.749348 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/57116b5a-d040-43df-8d9d-22422f5fba5c-monitoring-plugin-cert\") pod \"monitoring-plugin-58ddbf6699-crdfx\" (UID: \"57116b5a-d040-43df-8d9d-22422f5fba5c\") " pod="openshift-monitoring/monitoring-plugin-58ddbf6699-crdfx" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.834139 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-58ddbf6699-crdfx" Nov 24 21:25:02 crc kubenswrapper[4915]: I1124 21:25:02.884172 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-689f8b4874-tvgnn"] Nov 24 21:25:02 crc kubenswrapper[4915]: W1124 21:25:02.907897 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5355b4f4_cfc3_4083_bbc4_57ccfb241098.slice/crio-3cd5ca86c8cec9aa40052c039f3be4897143dbc4af9d24278f3dfb2fe04bc53a WatchSource:0}: Error finding container 3cd5ca86c8cec9aa40052c039f3be4897143dbc4af9d24278f3dfb2fe04bc53a: Status 404 returned error can't find the container with id 3cd5ca86c8cec9aa40052c039f3be4897143dbc4af9d24278f3dfb2fe04bc53a Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.108710 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.111364 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.116276 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.116634 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.116885 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.117111 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6pu3hrmqlme9l" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.117243 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.117502 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-znpvb" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.117807 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.118190 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.118385 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.118515 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.121440 4915 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.126327 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.131725 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.132076 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.253705 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.253805 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254006 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-web-config\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254088 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9hpk\" (UniqueName: \"kubernetes.io/projected/c99f19f2-b19b-48ee-97ee-d0419b0485be-kube-api-access-z9hpk\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254121 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254176 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254246 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c99f19f2-b19b-48ee-97ee-d0419b0485be-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254299 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " 
pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254340 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254370 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c99f19f2-b19b-48ee-97ee-d0419b0485be-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254396 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254417 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254461 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: 
\"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254489 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c99f19f2-b19b-48ee-97ee-d0419b0485be-config-out\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254535 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254557 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-config\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254609 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.254647 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: 
\"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.297879 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-58ddbf6699-crdfx"] Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.355920 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.355983 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c99f19f2-b19b-48ee-97ee-d0419b0485be-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356006 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356038 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 
21:25:03.356064 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356315 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c99f19f2-b19b-48ee-97ee-d0419b0485be-config-out\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356343 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356386 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-config\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356415 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356465 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356490 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356508 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356546 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-web-config\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356571 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9hpk\" (UniqueName: \"kubernetes.io/projected/c99f19f2-b19b-48ee-97ee-d0419b0485be-kube-api-access-z9hpk\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356586 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356624 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356731 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c99f19f2-b19b-48ee-97ee-d0419b0485be-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.356755 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.357496 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c99f19f2-b19b-48ee-97ee-d0419b0485be-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.357932 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.358079 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.358412 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.359760 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.363329 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.363357 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.363367 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c99f19f2-b19b-48ee-97ee-d0419b0485be-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.365132 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.365317 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c99f19f2-b19b-48ee-97ee-d0419b0485be-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.365746 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.367316 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.367367 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-config\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.368029 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-web-config\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.369371 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.369713 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c99f19f2-b19b-48ee-97ee-d0419b0485be-config-out\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.373270 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/c99f19f2-b19b-48ee-97ee-d0419b0485be-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.385004 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9hpk\" (UniqueName: \"kubernetes.io/projected/c99f19f2-b19b-48ee-97ee-d0419b0485be-kube-api-access-z9hpk\") pod \"prometheus-k8s-0\" (UID: \"c99f19f2-b19b-48ee-97ee-d0419b0485be\") " pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.430524 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:03 crc kubenswrapper[4915]: I1124 21:25:03.687046 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" event={"ID":"5355b4f4-cfc3-4083-bbc4-57ccfb241098","Type":"ContainerStarted","Data":"3cd5ca86c8cec9aa40052c039f3be4897143dbc4af9d24278f3dfb2fe04bc53a"} Nov 24 21:25:04 crc kubenswrapper[4915]: I1124 21:25:04.695255 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" event={"ID":"7c402413-6f64-484b-86cd-86c462006072","Type":"ContainerStarted","Data":"7f450abd5f1286a07892d2a655669bda0f6d3195e35ad9226e0a7999a9fe8d16"} Nov 24 21:25:04 crc kubenswrapper[4915]: I1124 21:25:04.700226 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2aecaa08-c5f2-452a-95c3-9450177b243f","Type":"ContainerStarted","Data":"48d9c54f5a41339af840f9ed2ceb5dfe1cc0736658e57a5f0b3a853052878b65"} Nov 24 21:25:04 crc kubenswrapper[4915]: I1124 21:25:04.701357 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-58ddbf6699-crdfx" 
event={"ID":"57116b5a-d040-43df-8d9d-22422f5fba5c","Type":"ContainerStarted","Data":"a57c50a7e860e343d3aae861f2de926992e5708ea4c3790b16b38c98180ebd9d"} Nov 24 21:25:04 crc kubenswrapper[4915]: I1124 21:25:04.793169 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Nov 24 21:25:04 crc kubenswrapper[4915]: W1124 21:25:04.802179 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc99f19f2_b19b_48ee_97ee_d0419b0485be.slice/crio-17f516174297d23ae6c88bf66e9f441228c22966c14faaa51710c4c5212e1278 WatchSource:0}: Error finding container 17f516174297d23ae6c88bf66e9f441228c22966c14faaa51710c4c5212e1278: Status 404 returned error can't find the container with id 17f516174297d23ae6c88bf66e9f441228c22966c14faaa51710c4c5212e1278 Nov 24 21:25:05 crc kubenswrapper[4915]: I1124 21:25:05.713105 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" event={"ID":"7c402413-6f64-484b-86cd-86c462006072","Type":"ContainerStarted","Data":"d0a9b3f0a5166093509d825b3961bb9767d0861a845d25bac40c0182fbcd92fc"} Nov 24 21:25:05 crc kubenswrapper[4915]: I1124 21:25:05.713152 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" event={"ID":"7c402413-6f64-484b-86cd-86c462006072","Type":"ContainerStarted","Data":"66dda36f12c25fa3e14385eacb3dd9207e0f454b5f374721d29ed8e97b9c9160"} Nov 24 21:25:05 crc kubenswrapper[4915]: I1124 21:25:05.713253 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:25:05 crc kubenswrapper[4915]: I1124 21:25:05.715818 4915 generic.go:334] "Generic (PLEG): container finished" podID="c99f19f2-b19b-48ee-97ee-d0419b0485be" containerID="5cc8de1331b4da3031941c4c4642e0d330289f7ea2aff2b817d769a8ec7f0ef6" exitCode=0 Nov 24 21:25:05 crc kubenswrapper[4915]: 
I1124 21:25:05.715967 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c99f19f2-b19b-48ee-97ee-d0419b0485be","Type":"ContainerDied","Data":"5cc8de1331b4da3031941c4c4642e0d330289f7ea2aff2b817d769a8ec7f0ef6"} Nov 24 21:25:05 crc kubenswrapper[4915]: I1124 21:25:05.716060 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c99f19f2-b19b-48ee-97ee-d0419b0485be","Type":"ContainerStarted","Data":"17f516174297d23ae6c88bf66e9f441228c22966c14faaa51710c4c5212e1278"} Nov 24 21:25:05 crc kubenswrapper[4915]: I1124 21:25:05.720173 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2aecaa08-c5f2-452a-95c3-9450177b243f","Type":"ContainerStarted","Data":"759e5fac73cc6ab0b1317d45ebbd9bffcae1f1e8da9686183d302a5202a6dca9"} Nov 24 21:25:05 crc kubenswrapper[4915]: I1124 21:25:05.720223 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2aecaa08-c5f2-452a-95c3-9450177b243f","Type":"ContainerStarted","Data":"d129fc42a4d065fe7672c947c2f8f26f827b326c5a84b0483ccc1a29be4cbf9d"} Nov 24 21:25:05 crc kubenswrapper[4915]: I1124 21:25:05.747140 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" podStartSLOduration=2.692177726 podStartE2EDuration="7.747068642s" podCreationTimestamp="2025-11-24 21:24:58 +0000 UTC" firstStartedPulling="2025-11-24 21:24:59.309928048 +0000 UTC m=+317.626180221" lastFinishedPulling="2025-11-24 21:25:04.364818944 +0000 UTC m=+322.681071137" observedRunningTime="2025-11-24 21:25:05.738554947 +0000 UTC m=+324.054807140" watchObservedRunningTime="2025-11-24 21:25:05.747068642 +0000 UTC m=+324.063320885" Nov 24 21:25:06 crc kubenswrapper[4915]: I1124 21:25:06.731331 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" event={"ID":"5355b4f4-cfc3-4083-bbc4-57ccfb241098","Type":"ContainerStarted","Data":"c2b3fdbc40c468f18cd548a077a94670b71a4a9c74e81d4493b625bd169de23f"} Nov 24 21:25:06 crc kubenswrapper[4915]: I1124 21:25:06.736628 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2aecaa08-c5f2-452a-95c3-9450177b243f","Type":"ContainerStarted","Data":"08656769080d9f8b791d44d6c3b0061bb7bdf8ca392af0b25e3e73f12f8d4d50"} Nov 24 21:25:06 crc kubenswrapper[4915]: I1124 21:25:06.736684 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2aecaa08-c5f2-452a-95c3-9450177b243f","Type":"ContainerStarted","Data":"c6c99ff95216e172c87ad1c6ba46b14a9e19f22b9961f8c77784ec86ac634aed"} Nov 24 21:25:06 crc kubenswrapper[4915]: I1124 21:25:06.736699 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2aecaa08-c5f2-452a-95c3-9450177b243f","Type":"ContainerStarted","Data":"4be76573f7f24064661e88cfa5a9330d6465fb48058838a44d1fc6a473ff7e40"} Nov 24 21:25:06 crc kubenswrapper[4915]: I1124 21:25:06.737840 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-58ddbf6699-crdfx" event={"ID":"57116b5a-d040-43df-8d9d-22422f5fba5c","Type":"ContainerStarted","Data":"441b9f533d46374489ead0168ac2b5d8d560fc0b63fbb84a98126e5c53a1751d"} Nov 24 21:25:06 crc kubenswrapper[4915]: I1124 21:25:06.759807 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" podStartSLOduration=1.599146653 podStartE2EDuration="4.7597876s" podCreationTimestamp="2025-11-24 21:25:02 +0000 UTC" firstStartedPulling="2025-11-24 21:25:02.911687771 +0000 UTC m=+321.227939944" lastFinishedPulling="2025-11-24 21:25:06.072328718 +0000 UTC m=+324.388580891" observedRunningTime="2025-11-24 
21:25:06.758976748 +0000 UTC m=+325.075228951" watchObservedRunningTime="2025-11-24 21:25:06.7597876 +0000 UTC m=+325.076039783" Nov 24 21:25:06 crc kubenswrapper[4915]: I1124 21:25:06.793026 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.504369871 podStartE2EDuration="9.793007247s" podCreationTimestamp="2025-11-24 21:24:57 +0000 UTC" firstStartedPulling="2025-11-24 21:24:59.072465885 +0000 UTC m=+317.388718058" lastFinishedPulling="2025-11-24 21:25:04.361103221 +0000 UTC m=+322.677355434" observedRunningTime="2025-11-24 21:25:06.787828614 +0000 UTC m=+325.104080807" watchObservedRunningTime="2025-11-24 21:25:06.793007247 +0000 UTC m=+325.109259420" Nov 24 21:25:07 crc kubenswrapper[4915]: I1124 21:25:07.743145 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-58ddbf6699-crdfx" Nov 24 21:25:07 crc kubenswrapper[4915]: I1124 21:25:07.751723 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-58ddbf6699-crdfx" Nov 24 21:25:07 crc kubenswrapper[4915]: I1124 21:25:07.770882 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-58ddbf6699-crdfx" podStartSLOduration=3.6411271469999997 podStartE2EDuration="5.770858854s" podCreationTimestamp="2025-11-24 21:25:02 +0000 UTC" firstStartedPulling="2025-11-24 21:25:03.943824325 +0000 UTC m=+322.260076498" lastFinishedPulling="2025-11-24 21:25:06.073556032 +0000 UTC m=+324.389808205" observedRunningTime="2025-11-24 21:25:06.813065991 +0000 UTC m=+325.129318174" watchObservedRunningTime="2025-11-24 21:25:07.770858854 +0000 UTC m=+326.087111027" Nov 24 21:25:08 crc kubenswrapper[4915]: I1124 21:25:08.516102 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-h77kv" Nov 24 
21:25:08 crc kubenswrapper[4915]: I1124 21:25:08.563808 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-97jl2"] Nov 24 21:25:09 crc kubenswrapper[4915]: I1124 21:25:09.111211 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-759576649c-dxbqx" Nov 24 21:25:09 crc kubenswrapper[4915]: I1124 21:25:09.761736 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c99f19f2-b19b-48ee-97ee-d0419b0485be","Type":"ContainerStarted","Data":"5bca232f6d81d473c6af1b51d5f7399fb8577bd7823dc16b85adacfe2490bbe2"} Nov 24 21:25:09 crc kubenswrapper[4915]: I1124 21:25:09.762084 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c99f19f2-b19b-48ee-97ee-d0419b0485be","Type":"ContainerStarted","Data":"1b435d999d7219500e3bb97d654acec92cfcd626fbfbedd7cfac8b89af310679"} Nov 24 21:25:09 crc kubenswrapper[4915]: I1124 21:25:09.762110 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c99f19f2-b19b-48ee-97ee-d0419b0485be","Type":"ContainerStarted","Data":"9542a023fc81750a4da8f37b6c3fa03f6b966c45cbe065b90a84734521e3159c"} Nov 24 21:25:09 crc kubenswrapper[4915]: I1124 21:25:09.762122 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c99f19f2-b19b-48ee-97ee-d0419b0485be","Type":"ContainerStarted","Data":"32cb2f777d990beb79b9f7d029c2b5979ec64a1783b8a1748f1760b3bfd149a7"} Nov 24 21:25:09 crc kubenswrapper[4915]: I1124 21:25:09.762133 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c99f19f2-b19b-48ee-97ee-d0419b0485be","Type":"ContainerStarted","Data":"3191a343d3ad5960194218a013b1ebdeab129b74b56985f436cef78f23749be4"} Nov 24 21:25:09 crc kubenswrapper[4915]: I1124 
21:25:09.762144 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c99f19f2-b19b-48ee-97ee-d0419b0485be","Type":"ContainerStarted","Data":"13dba20917175b491ca74f192a082841c364b6fd15025e5ef850f8866aed3f75"} Nov 24 21:25:10 crc kubenswrapper[4915]: I1124 21:25:10.820432 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.598218748 podStartE2EDuration="7.82041452s" podCreationTimestamp="2025-11-24 21:25:03 +0000 UTC" firstStartedPulling="2025-11-24 21:25:05.718570795 +0000 UTC m=+324.034822968" lastFinishedPulling="2025-11-24 21:25:08.940766567 +0000 UTC m=+327.257018740" observedRunningTime="2025-11-24 21:25:10.810863308 +0000 UTC m=+329.127115501" watchObservedRunningTime="2025-11-24 21:25:10.82041452 +0000 UTC m=+329.136666693" Nov 24 21:25:11 crc kubenswrapper[4915]: I1124 21:25:11.877149 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:11 crc kubenswrapper[4915]: I1124 21:25:11.877469 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:11 crc kubenswrapper[4915]: I1124 21:25:11.882953 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:12 crc kubenswrapper[4915]: I1124 21:25:12.794610 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:25:12 crc kubenswrapper[4915]: I1124 21:25:12.860162 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x7cqd"] Nov 24 21:25:13 crc kubenswrapper[4915]: I1124 21:25:13.431210 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:25:22 crc 
kubenswrapper[4915]: I1124 21:25:22.606620 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:22 crc kubenswrapper[4915]: I1124 21:25:22.608304 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:24 crc kubenswrapper[4915]: I1124 21:25:24.327617 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:25:24 crc kubenswrapper[4915]: I1124 21:25:24.328743 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:25:33 crc kubenswrapper[4915]: I1124 21:25:33.788211 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" podUID="03660b87-4011-4ee8-ac77-a26a9f853005" containerName="registry" containerID="cri-o://baf6a2c4ea026d31999b13c60fb342d528aedb9a5d653f881fd9a7d8c8bb6b8e" gracePeriod=30 Nov 24 21:25:33 crc kubenswrapper[4915]: I1124 21:25:33.944979 4915 generic.go:334] "Generic (PLEG): container finished" podID="03660b87-4011-4ee8-ac77-a26a9f853005" containerID="baf6a2c4ea026d31999b13c60fb342d528aedb9a5d653f881fd9a7d8c8bb6b8e" exitCode=0 Nov 24 21:25:33 crc kubenswrapper[4915]: I1124 21:25:33.945073 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" 
event={"ID":"03660b87-4011-4ee8-ac77-a26a9f853005","Type":"ContainerDied","Data":"baf6a2c4ea026d31999b13c60fb342d528aedb9a5d653f881fd9a7d8c8bb6b8e"} Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.126492 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.173752 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/03660b87-4011-4ee8-ac77-a26a9f853005-installation-pull-secrets\") pod \"03660b87-4011-4ee8-ac77-a26a9f853005\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.173849 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptlpg\" (UniqueName: \"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-kube-api-access-ptlpg\") pod \"03660b87-4011-4ee8-ac77-a26a9f853005\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.173888 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/03660b87-4011-4ee8-ac77-a26a9f853005-registry-certificates\") pod \"03660b87-4011-4ee8-ac77-a26a9f853005\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.174116 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"03660b87-4011-4ee8-ac77-a26a9f853005\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.174151 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"registry-tls\" (UniqueName: \"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-registry-tls\") pod \"03660b87-4011-4ee8-ac77-a26a9f853005\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.174192 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03660b87-4011-4ee8-ac77-a26a9f853005-trusted-ca\") pod \"03660b87-4011-4ee8-ac77-a26a9f853005\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.174256 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-bound-sa-token\") pod \"03660b87-4011-4ee8-ac77-a26a9f853005\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.174292 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/03660b87-4011-4ee8-ac77-a26a9f853005-ca-trust-extracted\") pod \"03660b87-4011-4ee8-ac77-a26a9f853005\" (UID: \"03660b87-4011-4ee8-ac77-a26a9f853005\") " Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.175030 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03660b87-4011-4ee8-ac77-a26a9f853005-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "03660b87-4011-4ee8-ac77-a26a9f853005" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.175092 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03660b87-4011-4ee8-ac77-a26a9f853005-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "03660b87-4011-4ee8-ac77-a26a9f853005" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.181353 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03660b87-4011-4ee8-ac77-a26a9f853005-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "03660b87-4011-4ee8-ac77-a26a9f853005" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.182831 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-kube-api-access-ptlpg" (OuterVolumeSpecName: "kube-api-access-ptlpg") pod "03660b87-4011-4ee8-ac77-a26a9f853005" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005"). InnerVolumeSpecName "kube-api-access-ptlpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.185909 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "03660b87-4011-4ee8-ac77-a26a9f853005" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.186812 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "03660b87-4011-4ee8-ac77-a26a9f853005" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.194518 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "03660b87-4011-4ee8-ac77-a26a9f853005" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.194802 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03660b87-4011-4ee8-ac77-a26a9f853005-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "03660b87-4011-4ee8-ac77-a26a9f853005" (UID: "03660b87-4011-4ee8-ac77-a26a9f853005"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.275795 4915 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.275838 4915 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/03660b87-4011-4ee8-ac77-a26a9f853005-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.275851 4915 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/03660b87-4011-4ee8-ac77-a26a9f853005-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.275866 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptlpg\" (UniqueName: \"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-kube-api-access-ptlpg\") on node \"crc\" DevicePath \"\"" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.275874 4915 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/03660b87-4011-4ee8-ac77-a26a9f853005-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.275883 4915 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/03660b87-4011-4ee8-ac77-a26a9f853005-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.275891 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/03660b87-4011-4ee8-ac77-a26a9f853005-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:25:34 crc 
kubenswrapper[4915]: I1124 21:25:34.951880 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" event={"ID":"03660b87-4011-4ee8-ac77-a26a9f853005","Type":"ContainerDied","Data":"1b431c8d983a04a81624e0f7c65196ff9b33e70bbf148ee7c2e910df1eaded68"} Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.951919 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-97jl2" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.951942 4915 scope.go:117] "RemoveContainer" containerID="baf6a2c4ea026d31999b13c60fb342d528aedb9a5d653f881fd9a7d8c8bb6b8e" Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.974011 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-97jl2"] Nov 24 21:25:34 crc kubenswrapper[4915]: I1124 21:25:34.979059 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-97jl2"] Nov 24 21:25:36 crc kubenswrapper[4915]: I1124 21:25:36.440128 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03660b87-4011-4ee8-ac77-a26a9f853005" path="/var/lib/kubelet/pods/03660b87-4011-4ee8-ac77-a26a9f853005/volumes" Nov 24 21:25:37 crc kubenswrapper[4915]: I1124 21:25:37.907664 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-x7cqd" podUID="c25872a5-42e3-4e20-ad54-594477784fa2" containerName="console" containerID="cri-o://f660b77cb81d6453ec9b45bfba1f25c7ba4f7b63fcff763867e90dbd077f103a" gracePeriod=15 Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.261712 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x7cqd_c25872a5-42e3-4e20-ad54-594477784fa2/console/0.log" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.261791 4915 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.335433 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c25872a5-42e3-4e20-ad54-594477784fa2-console-oauth-config\") pod \"c25872a5-42e3-4e20-ad54-594477784fa2\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.335495 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-service-ca\") pod \"c25872a5-42e3-4e20-ad54-594477784fa2\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.335533 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c25872a5-42e3-4e20-ad54-594477784fa2-console-serving-cert\") pod \"c25872a5-42e3-4e20-ad54-594477784fa2\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.335597 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb5cp\" (UniqueName: \"kubernetes.io/projected/c25872a5-42e3-4e20-ad54-594477784fa2-kube-api-access-wb5cp\") pod \"c25872a5-42e3-4e20-ad54-594477784fa2\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.335638 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-console-config\") pod \"c25872a5-42e3-4e20-ad54-594477784fa2\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.335671 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-trusted-ca-bundle\") pod \"c25872a5-42e3-4e20-ad54-594477784fa2\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.335712 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-oauth-serving-cert\") pod \"c25872a5-42e3-4e20-ad54-594477784fa2\" (UID: \"c25872a5-42e3-4e20-ad54-594477784fa2\") " Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.336396 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-service-ca" (OuterVolumeSpecName: "service-ca") pod "c25872a5-42e3-4e20-ad54-594477784fa2" (UID: "c25872a5-42e3-4e20-ad54-594477784fa2"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.336686 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-console-config" (OuterVolumeSpecName: "console-config") pod "c25872a5-42e3-4e20-ad54-594477784fa2" (UID: "c25872a5-42e3-4e20-ad54-594477784fa2"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.336698 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c25872a5-42e3-4e20-ad54-594477784fa2" (UID: "c25872a5-42e3-4e20-ad54-594477784fa2"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.337135 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c25872a5-42e3-4e20-ad54-594477784fa2" (UID: "c25872a5-42e3-4e20-ad54-594477784fa2"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.341247 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c25872a5-42e3-4e20-ad54-594477784fa2-kube-api-access-wb5cp" (OuterVolumeSpecName: "kube-api-access-wb5cp") pod "c25872a5-42e3-4e20-ad54-594477784fa2" (UID: "c25872a5-42e3-4e20-ad54-594477784fa2"). InnerVolumeSpecName "kube-api-access-wb5cp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.342039 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c25872a5-42e3-4e20-ad54-594477784fa2-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c25872a5-42e3-4e20-ad54-594477784fa2" (UID: "c25872a5-42e3-4e20-ad54-594477784fa2"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.342244 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c25872a5-42e3-4e20-ad54-594477784fa2-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c25872a5-42e3-4e20-ad54-594477784fa2" (UID: "c25872a5-42e3-4e20-ad54-594477784fa2"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.439707 4915 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c25872a5-42e3-4e20-ad54-594477784fa2-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.439766 4915 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.439801 4915 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c25872a5-42e3-4e20-ad54-594477784fa2-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.439814 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb5cp\" (UniqueName: \"kubernetes.io/projected/c25872a5-42e3-4e20-ad54-594477784fa2-kube-api-access-wb5cp\") on node \"crc\" DevicePath \"\"" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.439833 4915 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.439845 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.439856 4915 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c25872a5-42e3-4e20-ad54-594477784fa2-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:25:38 crc 
kubenswrapper[4915]: I1124 21:25:38.986878 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-x7cqd_c25872a5-42e3-4e20-ad54-594477784fa2/console/0.log" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.987265 4915 generic.go:334] "Generic (PLEG): container finished" podID="c25872a5-42e3-4e20-ad54-594477784fa2" containerID="f660b77cb81d6453ec9b45bfba1f25c7ba4f7b63fcff763867e90dbd077f103a" exitCode=2 Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.987309 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x7cqd" event={"ID":"c25872a5-42e3-4e20-ad54-594477784fa2","Type":"ContainerDied","Data":"f660b77cb81d6453ec9b45bfba1f25c7ba4f7b63fcff763867e90dbd077f103a"} Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.987344 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-x7cqd" event={"ID":"c25872a5-42e3-4e20-ad54-594477784fa2","Type":"ContainerDied","Data":"301aea8096aea37d3915fad8cca394e0257069f6ff0a3e7f6ddd19f734cf05ea"} Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.987369 4915 scope.go:117] "RemoveContainer" containerID="f660b77cb81d6453ec9b45bfba1f25c7ba4f7b63fcff763867e90dbd077f103a" Nov 24 21:25:38 crc kubenswrapper[4915]: I1124 21:25:38.987519 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-x7cqd" Nov 24 21:25:39 crc kubenswrapper[4915]: I1124 21:25:39.009309 4915 scope.go:117] "RemoveContainer" containerID="f660b77cb81d6453ec9b45bfba1f25c7ba4f7b63fcff763867e90dbd077f103a" Nov 24 21:25:39 crc kubenswrapper[4915]: E1124 21:25:39.009634 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f660b77cb81d6453ec9b45bfba1f25c7ba4f7b63fcff763867e90dbd077f103a\": container with ID starting with f660b77cb81d6453ec9b45bfba1f25c7ba4f7b63fcff763867e90dbd077f103a not found: ID does not exist" containerID="f660b77cb81d6453ec9b45bfba1f25c7ba4f7b63fcff763867e90dbd077f103a" Nov 24 21:25:39 crc kubenswrapper[4915]: I1124 21:25:39.009664 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f660b77cb81d6453ec9b45bfba1f25c7ba4f7b63fcff763867e90dbd077f103a"} err="failed to get container status \"f660b77cb81d6453ec9b45bfba1f25c7ba4f7b63fcff763867e90dbd077f103a\": rpc error: code = NotFound desc = could not find container \"f660b77cb81d6453ec9b45bfba1f25c7ba4f7b63fcff763867e90dbd077f103a\": container with ID starting with f660b77cb81d6453ec9b45bfba1f25c7ba4f7b63fcff763867e90dbd077f103a not found: ID does not exist" Nov 24 21:25:39 crc kubenswrapper[4915]: I1124 21:25:39.013618 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-x7cqd"] Nov 24 21:25:39 crc kubenswrapper[4915]: I1124 21:25:39.020232 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-x7cqd"] Nov 24 21:25:40 crc kubenswrapper[4915]: I1124 21:25:40.440886 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c25872a5-42e3-4e20-ad54-594477784fa2" path="/var/lib/kubelet/pods/c25872a5-42e3-4e20-ad54-594477784fa2/volumes" Nov 24 21:25:42 crc kubenswrapper[4915]: I1124 21:25:42.619114 4915 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:42 crc kubenswrapper[4915]: I1124 21:25:42.626374 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-689f8b4874-tvgnn" Nov 24 21:25:54 crc kubenswrapper[4915]: I1124 21:25:54.328180 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:25:54 crc kubenswrapper[4915]: I1124 21:25:54.328870 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:26:03 crc kubenswrapper[4915]: I1124 21:26:03.431122 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:26:04 crc kubenswrapper[4915]: I1124 21:26:03.456364 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:26:04 crc kubenswrapper[4915]: I1124 21:26:04.188859 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.416534 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6679746c6d-lph77"] Nov 24 21:26:16 crc kubenswrapper[4915]: E1124 21:26:16.417428 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25872a5-42e3-4e20-ad54-594477784fa2" containerName="console" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 
21:26:16.417447 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25872a5-42e3-4e20-ad54-594477784fa2" containerName="console" Nov 24 21:26:16 crc kubenswrapper[4915]: E1124 21:26:16.417497 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03660b87-4011-4ee8-ac77-a26a9f853005" containerName="registry" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.417510 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="03660b87-4011-4ee8-ac77-a26a9f853005" containerName="registry" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.417717 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="03660b87-4011-4ee8-ac77-a26a9f853005" containerName="registry" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.417739 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c25872a5-42e3-4e20-ad54-594477784fa2" containerName="console" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.418381 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.442223 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6679746c6d-lph77"] Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.502843 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-service-ca\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.502887 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8nqt\" (UniqueName: \"kubernetes.io/projected/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-kube-api-access-w8nqt\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.502936 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-oauth-config\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.502971 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-config\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.502990 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-trusted-ca-bundle\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.503014 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-oauth-serving-cert\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.503061 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-serving-cert\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.604874 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-service-ca\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.605496 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8nqt\" (UniqueName: \"kubernetes.io/projected/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-kube-api-access-w8nqt\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.605575 
4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-oauth-config\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.605637 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-config\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.605652 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-trusted-ca-bundle\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.605674 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-oauth-serving-cert\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.605681 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-service-ca\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.605705 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-serving-cert\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.606212 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-oauth-serving-cert\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.606653 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-trusted-ca-bundle\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.607033 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-config\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.610850 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-oauth-config\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.612508 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-serving-cert\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.621051 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8nqt\" (UniqueName: \"kubernetes.io/projected/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-kube-api-access-w8nqt\") pod \"console-6679746c6d-lph77\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:16 crc kubenswrapper[4915]: I1124 21:26:16.741913 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:17 crc kubenswrapper[4915]: I1124 21:26:17.169897 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6679746c6d-lph77"] Nov 24 21:26:17 crc kubenswrapper[4915]: I1124 21:26:17.248897 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6679746c6d-lph77" event={"ID":"c539181b-6e2f-4f16-98f1-0adf73a2c1a3","Type":"ContainerStarted","Data":"b3ec44b28cfda45aff4df8bdf983267e6b386842c328bbab538bb25dfa563417"} Nov 24 21:26:18 crc kubenswrapper[4915]: I1124 21:26:18.255369 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6679746c6d-lph77" event={"ID":"c539181b-6e2f-4f16-98f1-0adf73a2c1a3","Type":"ContainerStarted","Data":"778146fae420e3911807b26e94cf72ab3512fef945bf53af2338c96adf3ea131"} Nov 24 21:26:18 crc kubenswrapper[4915]: I1124 21:26:18.278378 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6679746c6d-lph77" podStartSLOduration=2.278358508 podStartE2EDuration="2.278358508s" podCreationTimestamp="2025-11-24 21:26:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:26:18.273580796 +0000 UTC m=+396.589832979" watchObservedRunningTime="2025-11-24 21:26:18.278358508 +0000 UTC m=+396.594610681" Nov 24 21:26:24 crc kubenswrapper[4915]: I1124 21:26:24.328191 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:26:24 crc kubenswrapper[4915]: I1124 21:26:24.328904 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:26:24 crc kubenswrapper[4915]: I1124 21:26:24.328974 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:26:24 crc kubenswrapper[4915]: I1124 21:26:24.329805 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7a927e1b01e5942836750a7160599076a5dc63cd1a6d7a3fc0c7b1b258e1c919"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:26:24 crc kubenswrapper[4915]: I1124 21:26:24.329907 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://7a927e1b01e5942836750a7160599076a5dc63cd1a6d7a3fc0c7b1b258e1c919" gracePeriod=600 Nov 
24 21:26:25 crc kubenswrapper[4915]: I1124 21:26:25.304990 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="7a927e1b01e5942836750a7160599076a5dc63cd1a6d7a3fc0c7b1b258e1c919" exitCode=0 Nov 24 21:26:25 crc kubenswrapper[4915]: I1124 21:26:25.305189 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"7a927e1b01e5942836750a7160599076a5dc63cd1a6d7a3fc0c7b1b258e1c919"} Nov 24 21:26:25 crc kubenswrapper[4915]: I1124 21:26:25.305552 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"57ea3717b75122ffe3155ca58b5c8ae3efe0bf8c9a3d96219f40084a7d067ef2"} Nov 24 21:26:25 crc kubenswrapper[4915]: I1124 21:26:25.305579 4915 scope.go:117] "RemoveContainer" containerID="34ca0517f2f9e7cb322919f7e171cd19a354c19842d7cdff719d15d3f14f1336" Nov 24 21:26:26 crc kubenswrapper[4915]: I1124 21:26:26.742492 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:26 crc kubenswrapper[4915]: I1124 21:26:26.742863 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:26 crc kubenswrapper[4915]: I1124 21:26:26.747595 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:27 crc kubenswrapper[4915]: I1124 21:26:27.324842 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:26:27 crc kubenswrapper[4915]: I1124 21:26:27.371937 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-console/console-7bc4d6bc5d-txx68"] Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.415266 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7bc4d6bc5d-txx68" podUID="fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0" containerName="console" containerID="cri-o://a15f5ffd422a3851b32f1205cf09e357a18ad04f91819c6ab85d878a7071b232" gracePeriod=15 Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.757118 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bc4d6bc5d-txx68_fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0/console/0.log" Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.757542 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.954296 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-oauth-serving-cert\") pod \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.954394 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-config\") pod \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.954499 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-trusted-ca-bundle\") pod \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.954702 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-f69h5\" (UniqueName: \"kubernetes.io/projected/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-kube-api-access-f69h5\") pod \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.954764 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-service-ca\") pod \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.954900 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-oauth-config\") pod \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.954945 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-serving-cert\") pod \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\" (UID: \"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0\") " Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.955419 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0" (UID: "fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.955439 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-config" (OuterVolumeSpecName: "console-config") pod "fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0" (UID: "fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.955661 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-service-ca" (OuterVolumeSpecName: "service-ca") pod "fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0" (UID: "fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.955816 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0" (UID: "fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.959898 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-kube-api-access-f69h5" (OuterVolumeSpecName: "kube-api-access-f69h5") pod "fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0" (UID: "fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0"). InnerVolumeSpecName "kube-api-access-f69h5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.959967 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0" (UID: "fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:26:52 crc kubenswrapper[4915]: I1124 21:26:52.961363 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0" (UID: "fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.057005 4915 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.057056 4915 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.057067 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.057078 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f69h5\" (UniqueName: 
\"kubernetes.io/projected/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-kube-api-access-f69h5\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.057093 4915 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.057103 4915 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.057113 4915 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.500846 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bc4d6bc5d-txx68_fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0/console/0.log" Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.501228 4915 generic.go:334] "Generic (PLEG): container finished" podID="fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0" containerID="a15f5ffd422a3851b32f1205cf09e357a18ad04f91819c6ab85d878a7071b232" exitCode=2 Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.501328 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7bc4d6bc5d-txx68" Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.501323 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bc4d6bc5d-txx68" event={"ID":"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0","Type":"ContainerDied","Data":"a15f5ffd422a3851b32f1205cf09e357a18ad04f91819c6ab85d878a7071b232"} Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.501469 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bc4d6bc5d-txx68" event={"ID":"fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0","Type":"ContainerDied","Data":"b6ab915899a535c626e29a82051a18645e03bad7702ab5ee87bd42fdd05ccc8c"} Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.501618 4915 scope.go:117] "RemoveContainer" containerID="a15f5ffd422a3851b32f1205cf09e357a18ad04f91819c6ab85d878a7071b232" Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.524961 4915 scope.go:117] "RemoveContainer" containerID="a15f5ffd422a3851b32f1205cf09e357a18ad04f91819c6ab85d878a7071b232" Nov 24 21:26:53 crc kubenswrapper[4915]: E1124 21:26:53.530277 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a15f5ffd422a3851b32f1205cf09e357a18ad04f91819c6ab85d878a7071b232\": container with ID starting with a15f5ffd422a3851b32f1205cf09e357a18ad04f91819c6ab85d878a7071b232 not found: ID does not exist" containerID="a15f5ffd422a3851b32f1205cf09e357a18ad04f91819c6ab85d878a7071b232" Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.530343 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a15f5ffd422a3851b32f1205cf09e357a18ad04f91819c6ab85d878a7071b232"} err="failed to get container status \"a15f5ffd422a3851b32f1205cf09e357a18ad04f91819c6ab85d878a7071b232\": rpc error: code = NotFound desc = could not find container \"a15f5ffd422a3851b32f1205cf09e357a18ad04f91819c6ab85d878a7071b232\": 
container with ID starting with a15f5ffd422a3851b32f1205cf09e357a18ad04f91819c6ab85d878a7071b232 not found: ID does not exist" Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.547906 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7bc4d6bc5d-txx68"] Nov 24 21:26:53 crc kubenswrapper[4915]: I1124 21:26:53.553099 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7bc4d6bc5d-txx68"] Nov 24 21:26:54 crc kubenswrapper[4915]: I1124 21:26:54.436679 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0" path="/var/lib/kubelet/pods/fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0/volumes" Nov 24 21:28:54 crc kubenswrapper[4915]: I1124 21:28:54.327900 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:28:54 crc kubenswrapper[4915]: I1124 21:28:54.328443 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:29:24 crc kubenswrapper[4915]: I1124 21:29:24.327372 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:29:24 crc kubenswrapper[4915]: I1124 21:29:24.328059 4915 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:29:54 crc kubenswrapper[4915]: I1124 21:29:54.327743 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:29:54 crc kubenswrapper[4915]: I1124 21:29:54.328342 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:29:54 crc kubenswrapper[4915]: I1124 21:29:54.328406 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:29:54 crc kubenswrapper[4915]: I1124 21:29:54.329144 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57ea3717b75122ffe3155ca58b5c8ae3efe0bf8c9a3d96219f40084a7d067ef2"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:29:54 crc kubenswrapper[4915]: I1124 21:29:54.329227 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" 
containerID="cri-o://57ea3717b75122ffe3155ca58b5c8ae3efe0bf8c9a3d96219f40084a7d067ef2" gracePeriod=600 Nov 24 21:29:54 crc kubenswrapper[4915]: I1124 21:29:54.854635 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="57ea3717b75122ffe3155ca58b5c8ae3efe0bf8c9a3d96219f40084a7d067ef2" exitCode=0 Nov 24 21:29:54 crc kubenswrapper[4915]: I1124 21:29:54.854742 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"57ea3717b75122ffe3155ca58b5c8ae3efe0bf8c9a3d96219f40084a7d067ef2"} Nov 24 21:29:54 crc kubenswrapper[4915]: I1124 21:29:54.855391 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"615393d21d21ae1d445108dc4b58018415536dc2737ece31d186b4e6013b73e9"} Nov 24 21:29:54 crc kubenswrapper[4915]: I1124 21:29:54.855547 4915 scope.go:117] "RemoveContainer" containerID="7a927e1b01e5942836750a7160599076a5dc63cd1a6d7a3fc0c7b1b258e1c919" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.157926 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg"] Nov 24 21:30:00 crc kubenswrapper[4915]: E1124 21:30:00.159365 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0" containerName="console" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.159397 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0" containerName="console" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.159672 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf97979-22ba-43ca-b0e6-f5dc3ecc84e0" containerName="console" Nov 24 21:30:00 crc 
kubenswrapper[4915]: I1124 21:30:00.162836 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.165313 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.165558 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.175040 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg"] Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.223389 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-secret-volume\") pod \"collect-profiles-29400330-m77rg\" (UID: \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.223486 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-config-volume\") pod \"collect-profiles-29400330-m77rg\" (UID: \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.223536 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgxwq\" (UniqueName: \"kubernetes.io/projected/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-kube-api-access-bgxwq\") pod \"collect-profiles-29400330-m77rg\" 
(UID: \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.324084 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-secret-volume\") pod \"collect-profiles-29400330-m77rg\" (UID: \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.324144 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-config-volume\") pod \"collect-profiles-29400330-m77rg\" (UID: \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.324180 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgxwq\" (UniqueName: \"kubernetes.io/projected/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-kube-api-access-bgxwq\") pod \"collect-profiles-29400330-m77rg\" (UID: \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.325756 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-config-volume\") pod \"collect-profiles-29400330-m77rg\" (UID: \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.335209 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-secret-volume\") pod \"collect-profiles-29400330-m77rg\" (UID: \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.342313 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgxwq\" (UniqueName: \"kubernetes.io/projected/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-kube-api-access-bgxwq\") pod \"collect-profiles-29400330-m77rg\" (UID: \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.493204 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" Nov 24 21:30:00 crc kubenswrapper[4915]: I1124 21:30:00.964990 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg"] Nov 24 21:30:01 crc kubenswrapper[4915]: I1124 21:30:01.914589 4915 generic.go:334] "Generic (PLEG): container finished" podID="e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc" containerID="c62f234ecec394e652ad6e26029697d408d5aa75b9e1029dc4acdd7a8230cf89" exitCode=0 Nov 24 21:30:01 crc kubenswrapper[4915]: I1124 21:30:01.914648 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" event={"ID":"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc","Type":"ContainerDied","Data":"c62f234ecec394e652ad6e26029697d408d5aa75b9e1029dc4acdd7a8230cf89"} Nov 24 21:30:01 crc kubenswrapper[4915]: I1124 21:30:01.914693 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" 
event={"ID":"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc","Type":"ContainerStarted","Data":"1b3891d169ac375aea9a8b1910c3a3bbea9416725a83f74f0637ddcb6db59bdf"} Nov 24 21:30:03 crc kubenswrapper[4915]: I1124 21:30:03.170010 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" Nov 24 21:30:03 crc kubenswrapper[4915]: I1124 21:30:03.276456 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-secret-volume\") pod \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\" (UID: \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\") " Nov 24 21:30:03 crc kubenswrapper[4915]: I1124 21:30:03.276514 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgxwq\" (UniqueName: \"kubernetes.io/projected/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-kube-api-access-bgxwq\") pod \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\" (UID: \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\") " Nov 24 21:30:03 crc kubenswrapper[4915]: I1124 21:30:03.276581 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-config-volume\") pod \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\" (UID: \"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc\") " Nov 24 21:30:03 crc kubenswrapper[4915]: I1124 21:30:03.277587 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-config-volume" (OuterVolumeSpecName: "config-volume") pod "e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc" (UID: "e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:30:03 crc kubenswrapper[4915]: I1124 21:30:03.282718 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc" (UID: "e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:30:03 crc kubenswrapper[4915]: I1124 21:30:03.282906 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-kube-api-access-bgxwq" (OuterVolumeSpecName: "kube-api-access-bgxwq") pod "e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc" (UID: "e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc"). InnerVolumeSpecName "kube-api-access-bgxwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:30:03 crc kubenswrapper[4915]: I1124 21:30:03.378520 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:03 crc kubenswrapper[4915]: I1124 21:30:03.378567 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:03 crc kubenswrapper[4915]: I1124 21:30:03.378581 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgxwq\" (UniqueName: \"kubernetes.io/projected/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc-kube-api-access-bgxwq\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:03 crc kubenswrapper[4915]: I1124 21:30:03.931868 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" 
event={"ID":"e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc","Type":"ContainerDied","Data":"1b3891d169ac375aea9a8b1910c3a3bbea9416725a83f74f0637ddcb6db59bdf"} Nov 24 21:30:03 crc kubenswrapper[4915]: I1124 21:30:03.932258 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b3891d169ac375aea9a8b1910c3a3bbea9416725a83f74f0637ddcb6db59bdf" Nov 24 21:30:03 crc kubenswrapper[4915]: I1124 21:30:03.931928 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg" Nov 24 21:30:36 crc kubenswrapper[4915]: I1124 21:30:36.681062 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm"] Nov 24 21:30:36 crc kubenswrapper[4915]: E1124 21:30:36.681903 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc" containerName="collect-profiles" Nov 24 21:30:36 crc kubenswrapper[4915]: I1124 21:30:36.681918 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc" containerName="collect-profiles" Nov 24 21:30:36 crc kubenswrapper[4915]: I1124 21:30:36.682077 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc" containerName="collect-profiles" Nov 24 21:30:36 crc kubenswrapper[4915]: I1124 21:30:36.683240 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" Nov 24 21:30:36 crc kubenswrapper[4915]: I1124 21:30:36.685112 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 21:30:36 crc kubenswrapper[4915]: I1124 21:30:36.697914 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm"] Nov 24 21:30:36 crc kubenswrapper[4915]: I1124 21:30:36.868234 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd00050c-03ed-4857-a586-146fc1d10b91-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm\" (UID: \"fd00050c-03ed-4857-a586-146fc1d10b91\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" Nov 24 21:30:36 crc kubenswrapper[4915]: I1124 21:30:36.868692 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4v5z\" (UniqueName: \"kubernetes.io/projected/fd00050c-03ed-4857-a586-146fc1d10b91-kube-api-access-m4v5z\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm\" (UID: \"fd00050c-03ed-4857-a586-146fc1d10b91\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" Nov 24 21:30:36 crc kubenswrapper[4915]: I1124 21:30:36.868725 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd00050c-03ed-4857-a586-146fc1d10b91-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm\" (UID: \"fd00050c-03ed-4857-a586-146fc1d10b91\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" Nov 24 21:30:36 crc kubenswrapper[4915]: 
I1124 21:30:36.969828 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd00050c-03ed-4857-a586-146fc1d10b91-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm\" (UID: \"fd00050c-03ed-4857-a586-146fc1d10b91\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" Nov 24 21:30:36 crc kubenswrapper[4915]: I1124 21:30:36.969885 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4v5z\" (UniqueName: \"kubernetes.io/projected/fd00050c-03ed-4857-a586-146fc1d10b91-kube-api-access-m4v5z\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm\" (UID: \"fd00050c-03ed-4857-a586-146fc1d10b91\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" Nov 24 21:30:36 crc kubenswrapper[4915]: I1124 21:30:36.969912 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd00050c-03ed-4857-a586-146fc1d10b91-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm\" (UID: \"fd00050c-03ed-4857-a586-146fc1d10b91\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" Nov 24 21:30:36 crc kubenswrapper[4915]: I1124 21:30:36.970331 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd00050c-03ed-4857-a586-146fc1d10b91-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm\" (UID: \"fd00050c-03ed-4857-a586-146fc1d10b91\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" Nov 24 21:30:36 crc kubenswrapper[4915]: I1124 21:30:36.971755 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/fd00050c-03ed-4857-a586-146fc1d10b91-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm\" (UID: \"fd00050c-03ed-4857-a586-146fc1d10b91\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" Nov 24 21:30:36 crc kubenswrapper[4915]: I1124 21:30:36.992589 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4v5z\" (UniqueName: \"kubernetes.io/projected/fd00050c-03ed-4857-a586-146fc1d10b91-kube-api-access-m4v5z\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm\" (UID: \"fd00050c-03ed-4857-a586-146fc1d10b91\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" Nov 24 21:30:37 crc kubenswrapper[4915]: I1124 21:30:37.006978 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" Nov 24 21:30:37 crc kubenswrapper[4915]: I1124 21:30:37.243616 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm"] Nov 24 21:30:38 crc kubenswrapper[4915]: I1124 21:30:38.232190 4915 generic.go:334] "Generic (PLEG): container finished" podID="fd00050c-03ed-4857-a586-146fc1d10b91" containerID="098a1d068526b77f83d887ca4ea946eb2c4dbbeec095aac85f3e11c0502e24dd" exitCode=0 Nov 24 21:30:38 crc kubenswrapper[4915]: I1124 21:30:38.232272 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" event={"ID":"fd00050c-03ed-4857-a586-146fc1d10b91","Type":"ContainerDied","Data":"098a1d068526b77f83d887ca4ea946eb2c4dbbeec095aac85f3e11c0502e24dd"} Nov 24 21:30:38 crc kubenswrapper[4915]: I1124 21:30:38.232769 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" event={"ID":"fd00050c-03ed-4857-a586-146fc1d10b91","Type":"ContainerStarted","Data":"c2464b4af44adc533ec2d8032aa8d55ce00757b0f842a35d42506765f25ebd8e"} Nov 24 21:30:38 crc kubenswrapper[4915]: I1124 21:30:38.235282 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:30:40 crc kubenswrapper[4915]: I1124 21:30:40.247293 4915 generic.go:334] "Generic (PLEG): container finished" podID="fd00050c-03ed-4857-a586-146fc1d10b91" containerID="8f4af0b54050ab6fe5db301a73f43775cc3bbefac3d702acc4cbd6907a56a2f2" exitCode=0 Nov 24 21:30:40 crc kubenswrapper[4915]: I1124 21:30:40.247406 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" event={"ID":"fd00050c-03ed-4857-a586-146fc1d10b91","Type":"ContainerDied","Data":"8f4af0b54050ab6fe5db301a73f43775cc3bbefac3d702acc4cbd6907a56a2f2"} Nov 24 21:30:41 crc kubenswrapper[4915]: I1124 21:30:41.262279 4915 generic.go:334] "Generic (PLEG): container finished" podID="fd00050c-03ed-4857-a586-146fc1d10b91" containerID="b55383d45643089c219984b2393e70e99c83f585b67fa14c9d9ee7fb839778fd" exitCode=0 Nov 24 21:30:41 crc kubenswrapper[4915]: I1124 21:30:41.262335 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" event={"ID":"fd00050c-03ed-4857-a586-146fc1d10b91","Type":"ContainerDied","Data":"b55383d45643089c219984b2393e70e99c83f585b67fa14c9d9ee7fb839778fd"} Nov 24 21:30:42 crc kubenswrapper[4915]: I1124 21:30:42.583116 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" Nov 24 21:30:42 crc kubenswrapper[4915]: I1124 21:30:42.760852 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd00050c-03ed-4857-a586-146fc1d10b91-util\") pod \"fd00050c-03ed-4857-a586-146fc1d10b91\" (UID: \"fd00050c-03ed-4857-a586-146fc1d10b91\") " Nov 24 21:30:42 crc kubenswrapper[4915]: I1124 21:30:42.761034 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4v5z\" (UniqueName: \"kubernetes.io/projected/fd00050c-03ed-4857-a586-146fc1d10b91-kube-api-access-m4v5z\") pod \"fd00050c-03ed-4857-a586-146fc1d10b91\" (UID: \"fd00050c-03ed-4857-a586-146fc1d10b91\") " Nov 24 21:30:42 crc kubenswrapper[4915]: I1124 21:30:42.761114 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd00050c-03ed-4857-a586-146fc1d10b91-bundle\") pod \"fd00050c-03ed-4857-a586-146fc1d10b91\" (UID: \"fd00050c-03ed-4857-a586-146fc1d10b91\") " Nov 24 21:30:42 crc kubenswrapper[4915]: I1124 21:30:42.762958 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd00050c-03ed-4857-a586-146fc1d10b91-bundle" (OuterVolumeSpecName: "bundle") pod "fd00050c-03ed-4857-a586-146fc1d10b91" (UID: "fd00050c-03ed-4857-a586-146fc1d10b91"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:30:42 crc kubenswrapper[4915]: I1124 21:30:42.769603 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd00050c-03ed-4857-a586-146fc1d10b91-kube-api-access-m4v5z" (OuterVolumeSpecName: "kube-api-access-m4v5z") pod "fd00050c-03ed-4857-a586-146fc1d10b91" (UID: "fd00050c-03ed-4857-a586-146fc1d10b91"). InnerVolumeSpecName "kube-api-access-m4v5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:30:42 crc kubenswrapper[4915]: I1124 21:30:42.862868 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4v5z\" (UniqueName: \"kubernetes.io/projected/fd00050c-03ed-4857-a586-146fc1d10b91-kube-api-access-m4v5z\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:42 crc kubenswrapper[4915]: I1124 21:30:42.862916 4915 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fd00050c-03ed-4857-a586-146fc1d10b91-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:43 crc kubenswrapper[4915]: I1124 21:30:43.110270 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd00050c-03ed-4857-a586-146fc1d10b91-util" (OuterVolumeSpecName: "util") pod "fd00050c-03ed-4857-a586-146fc1d10b91" (UID: "fd00050c-03ed-4857-a586-146fc1d10b91"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:30:43 crc kubenswrapper[4915]: I1124 21:30:43.167044 4915 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fd00050c-03ed-4857-a586-146fc1d10b91-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:30:43 crc kubenswrapper[4915]: I1124 21:30:43.277966 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" event={"ID":"fd00050c-03ed-4857-a586-146fc1d10b91","Type":"ContainerDied","Data":"c2464b4af44adc533ec2d8032aa8d55ce00757b0f842a35d42506765f25ebd8e"} Nov 24 21:30:43 crc kubenswrapper[4915]: I1124 21:30:43.278008 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2464b4af44adc533ec2d8032aa8d55ce00757b0f842a35d42506765f25ebd8e" Nov 24 21:30:43 crc kubenswrapper[4915]: I1124 21:30:43.278066 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.775575 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-qtz6h"] Nov 24 21:30:54 crc kubenswrapper[4915]: E1124 21:30:54.776264 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd00050c-03ed-4857-a586-146fc1d10b91" containerName="pull" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.776275 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd00050c-03ed-4857-a586-146fc1d10b91" containerName="pull" Nov 24 21:30:54 crc kubenswrapper[4915]: E1124 21:30:54.776293 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd00050c-03ed-4857-a586-146fc1d10b91" containerName="util" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.776298 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd00050c-03ed-4857-a586-146fc1d10b91" containerName="util" Nov 24 21:30:54 crc kubenswrapper[4915]: E1124 21:30:54.776309 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd00050c-03ed-4857-a586-146fc1d10b91" containerName="extract" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.776315 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd00050c-03ed-4857-a586-146fc1d10b91" containerName="extract" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.776434 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd00050c-03ed-4857-a586-146fc1d10b91" containerName="extract" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.776844 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qtz6h" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.779609 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-qc6tk" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.779705 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.781275 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.804863 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-qtz6h"] Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.897566 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw"] Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.898268 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.903206 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-grjhg" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.904714 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.917866 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq"] Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.918814 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.930513 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6p2j\" (UniqueName: \"kubernetes.io/projected/00e70017-d6b3-4d57-b669-068d163bc3c4-kube-api-access-q6p2j\") pod \"obo-prometheus-operator-668cf9dfbb-qtz6h\" (UID: \"00e70017-d6b3-4d57-b669-068d163bc3c4\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qtz6h" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.930585 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5720a2a7-47a3-4871-8003-736d6f5d7673-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw\" (UID: \"5720a2a7-47a3-4871-8003-736d6f5d7673\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.930605 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5720a2a7-47a3-4871-8003-736d6f5d7673-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw\" (UID: \"5720a2a7-47a3-4871-8003-736d6f5d7673\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.930630 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58a55a1e-cef9-48f3-b562-5b3a02c92cc3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq\" (UID: \"58a55a1e-cef9-48f3-b562-5b3a02c92cc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq" Nov 24 
21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.930653 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58a55a1e-cef9-48f3-b562-5b3a02c92cc3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq\" (UID: \"58a55a1e-cef9-48f3-b562-5b3a02c92cc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq" Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.931044 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq"] Nov 24 21:30:54 crc kubenswrapper[4915]: I1124 21:30:54.962936 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw"] Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.002554 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-sdmdj"] Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.003252 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-sdmdj" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.008475 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.010731 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-5hrxk" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.021978 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-sdmdj"] Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.031610 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6p2j\" (UniqueName: \"kubernetes.io/projected/00e70017-d6b3-4d57-b669-068d163bc3c4-kube-api-access-q6p2j\") pod \"obo-prometheus-operator-668cf9dfbb-qtz6h\" (UID: \"00e70017-d6b3-4d57-b669-068d163bc3c4\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qtz6h" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.031843 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5720a2a7-47a3-4871-8003-736d6f5d7673-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw\" (UID: \"5720a2a7-47a3-4871-8003-736d6f5d7673\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.031883 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5720a2a7-47a3-4871-8003-736d6f5d7673-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw\" (UID: \"5720a2a7-47a3-4871-8003-736d6f5d7673\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw" 
Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.031942 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58a55a1e-cef9-48f3-b562-5b3a02c92cc3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq\" (UID: \"58a55a1e-cef9-48f3-b562-5b3a02c92cc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.031978 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58a55a1e-cef9-48f3-b562-5b3a02c92cc3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq\" (UID: \"58a55a1e-cef9-48f3-b562-5b3a02c92cc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.042684 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/58a55a1e-cef9-48f3-b562-5b3a02c92cc3-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq\" (UID: \"58a55a1e-cef9-48f3-b562-5b3a02c92cc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.043266 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/58a55a1e-cef9-48f3-b562-5b3a02c92cc3-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq\" (UID: \"58a55a1e-cef9-48f3-b562-5b3a02c92cc3\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.047488 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/5720a2a7-47a3-4871-8003-736d6f5d7673-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw\" (UID: \"5720a2a7-47a3-4871-8003-736d6f5d7673\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.056138 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6p2j\" (UniqueName: \"kubernetes.io/projected/00e70017-d6b3-4d57-b669-068d163bc3c4-kube-api-access-q6p2j\") pod \"obo-prometheus-operator-668cf9dfbb-qtz6h\" (UID: \"00e70017-d6b3-4d57-b669-068d163bc3c4\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qtz6h" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.059136 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5720a2a7-47a3-4871-8003-736d6f5d7673-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw\" (UID: \"5720a2a7-47a3-4871-8003-736d6f5d7673\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.096516 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qtz6h" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.132935 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f68bd40b-44eb-4241-9f04-5d3deadf1951-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-sdmdj\" (UID: \"f68bd40b-44eb-4241-9f04-5d3deadf1951\") " pod="openshift-operators/observability-operator-d8bb48f5d-sdmdj" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.132990 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l4q8\" (UniqueName: \"kubernetes.io/projected/f68bd40b-44eb-4241-9f04-5d3deadf1951-kube-api-access-6l4q8\") pod \"observability-operator-d8bb48f5d-sdmdj\" (UID: \"f68bd40b-44eb-4241-9f04-5d3deadf1951\") " pod="openshift-operators/observability-operator-d8bb48f5d-sdmdj" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.207745 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-wmrpm"] Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.208956 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-wmrpm" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.213169 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-qkn7p" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.215054 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.231995 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.232623 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-wmrpm"] Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.233417 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ggln\" (UniqueName: \"kubernetes.io/projected/582a64c0-8cfc-43d9-beeb-37f6e2460561-kube-api-access-4ggln\") pod \"perses-operator-5446b9c989-wmrpm\" (UID: \"582a64c0-8cfc-43d9-beeb-37f6e2460561\") " pod="openshift-operators/perses-operator-5446b9c989-wmrpm" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.233448 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/582a64c0-8cfc-43d9-beeb-37f6e2460561-openshift-service-ca\") pod \"perses-operator-5446b9c989-wmrpm\" (UID: \"582a64c0-8cfc-43d9-beeb-37f6e2460561\") " pod="openshift-operators/perses-operator-5446b9c989-wmrpm" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.233491 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f68bd40b-44eb-4241-9f04-5d3deadf1951-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-sdmdj\" (UID: \"f68bd40b-44eb-4241-9f04-5d3deadf1951\") " pod="openshift-operators/observability-operator-d8bb48f5d-sdmdj" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.233512 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l4q8\" (UniqueName: \"kubernetes.io/projected/f68bd40b-44eb-4241-9f04-5d3deadf1951-kube-api-access-6l4q8\") pod \"observability-operator-d8bb48f5d-sdmdj\" (UID: \"f68bd40b-44eb-4241-9f04-5d3deadf1951\") " 
pod="openshift-operators/observability-operator-d8bb48f5d-sdmdj" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.238549 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f68bd40b-44eb-4241-9f04-5d3deadf1951-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-sdmdj\" (UID: \"f68bd40b-44eb-4241-9f04-5d3deadf1951\") " pod="openshift-operators/observability-operator-d8bb48f5d-sdmdj" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.256517 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l4q8\" (UniqueName: \"kubernetes.io/projected/f68bd40b-44eb-4241-9f04-5d3deadf1951-kube-api-access-6l4q8\") pod \"observability-operator-d8bb48f5d-sdmdj\" (UID: \"f68bd40b-44eb-4241-9f04-5d3deadf1951\") " pod="openshift-operators/observability-operator-d8bb48f5d-sdmdj" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.317749 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-sdmdj" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.335271 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ggln\" (UniqueName: \"kubernetes.io/projected/582a64c0-8cfc-43d9-beeb-37f6e2460561-kube-api-access-4ggln\") pod \"perses-operator-5446b9c989-wmrpm\" (UID: \"582a64c0-8cfc-43d9-beeb-37f6e2460561\") " pod="openshift-operators/perses-operator-5446b9c989-wmrpm" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.335313 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/582a64c0-8cfc-43d9-beeb-37f6e2460561-openshift-service-ca\") pod \"perses-operator-5446b9c989-wmrpm\" (UID: \"582a64c0-8cfc-43d9-beeb-37f6e2460561\") " pod="openshift-operators/perses-operator-5446b9c989-wmrpm" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.336174 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/582a64c0-8cfc-43d9-beeb-37f6e2460561-openshift-service-ca\") pod \"perses-operator-5446b9c989-wmrpm\" (UID: \"582a64c0-8cfc-43d9-beeb-37f6e2460561\") " pod="openshift-operators/perses-operator-5446b9c989-wmrpm" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.355734 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-qtz6h"] Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.358616 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ggln\" (UniqueName: \"kubernetes.io/projected/582a64c0-8cfc-43d9-beeb-37f6e2460561-kube-api-access-4ggln\") pod \"perses-operator-5446b9c989-wmrpm\" (UID: \"582a64c0-8cfc-43d9-beeb-37f6e2460561\") " pod="openshift-operators/perses-operator-5446b9c989-wmrpm" Nov 24 21:30:55 crc kubenswrapper[4915]: W1124 
21:30:55.393697 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00e70017_d6b3_4d57_b669_068d163bc3c4.slice/crio-14f1c8d0ceef2a72bada2d4acb2bfac2fa27f8a3f6c790594354c6fd55da6231 WatchSource:0}: Error finding container 14f1c8d0ceef2a72bada2d4acb2bfac2fa27f8a3f6c790594354c6fd55da6231: Status 404 returned error can't find the container with id 14f1c8d0ceef2a72bada2d4acb2bfac2fa27f8a3f6c790594354c6fd55da6231 Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.456040 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq"] Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.508821 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw"] Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.553302 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-wmrpm" Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.698364 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-sdmdj"] Nov 24 21:30:55 crc kubenswrapper[4915]: W1124 21:30:55.712339 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf68bd40b_44eb_4241_9f04_5d3deadf1951.slice/crio-c739e6fd2015f21dbf41a7455da8b73434282dc791b79d2811c64cc177bab71a WatchSource:0}: Error finding container c739e6fd2015f21dbf41a7455da8b73434282dc791b79d2811c64cc177bab71a: Status 404 returned error can't find the container with id c739e6fd2015f21dbf41a7455da8b73434282dc791b79d2811c64cc177bab71a Nov 24 21:30:55 crc kubenswrapper[4915]: I1124 21:30:55.803295 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-wmrpm"] Nov 24 21:30:55 crc kubenswrapper[4915]: W1124 21:30:55.810878 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod582a64c0_8cfc_43d9_beeb_37f6e2460561.slice/crio-d9d30389b8d934aa9972ccdd04121a1a18d187ff2dd793bf5eb910ae20f91015 WatchSource:0}: Error finding container d9d30389b8d934aa9972ccdd04121a1a18d187ff2dd793bf5eb910ae20f91015: Status 404 returned error can't find the container with id d9d30389b8d934aa9972ccdd04121a1a18d187ff2dd793bf5eb910ae20f91015 Nov 24 21:30:56 crc kubenswrapper[4915]: I1124 21:30:56.376965 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qtz6h" event={"ID":"00e70017-d6b3-4d57-b669-068d163bc3c4","Type":"ContainerStarted","Data":"14f1c8d0ceef2a72bada2d4acb2bfac2fa27f8a3f6c790594354c6fd55da6231"} Nov 24 21:30:56 crc kubenswrapper[4915]: I1124 21:30:56.378016 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw" event={"ID":"5720a2a7-47a3-4871-8003-736d6f5d7673","Type":"ContainerStarted","Data":"45f36b607479a7d56de6fdc560a5ad780d92ce4aa250b2ad51f438ca61cc7075"} Nov 24 21:30:56 crc kubenswrapper[4915]: I1124 21:30:56.379060 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-wmrpm" event={"ID":"582a64c0-8cfc-43d9-beeb-37f6e2460561","Type":"ContainerStarted","Data":"d9d30389b8d934aa9972ccdd04121a1a18d187ff2dd793bf5eb910ae20f91015"} Nov 24 21:30:56 crc kubenswrapper[4915]: I1124 21:30:56.380145 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq" event={"ID":"58a55a1e-cef9-48f3-b562-5b3a02c92cc3","Type":"ContainerStarted","Data":"8a83a931e2d01d4c56a48594b21b8081fe0a0c6a2901a8479f97d015648dbe69"} Nov 24 21:30:56 crc kubenswrapper[4915]: I1124 21:30:56.381459 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-sdmdj" event={"ID":"f68bd40b-44eb-4241-9f04-5d3deadf1951","Type":"ContainerStarted","Data":"c739e6fd2015f21dbf41a7455da8b73434282dc791b79d2811c64cc177bab71a"} Nov 24 21:31:03 crc kubenswrapper[4915]: I1124 21:31:03.933558 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jmqqt"] Nov 24 21:31:03 crc kubenswrapper[4915]: I1124 21:31:03.934869 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovn-controller" containerID="cri-o://ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93" gracePeriod=30 Nov 24 21:31:03 crc kubenswrapper[4915]: I1124 21:31:03.935012 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" 
podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="northd" containerID="cri-o://27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794" gracePeriod=30 Nov 24 21:31:03 crc kubenswrapper[4915]: I1124 21:31:03.935061 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49" gracePeriod=30 Nov 24 21:31:03 crc kubenswrapper[4915]: I1124 21:31:03.934991 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="nbdb" containerID="cri-o://6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027" gracePeriod=30 Nov 24 21:31:03 crc kubenswrapper[4915]: I1124 21:31:03.935103 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="kube-rbac-proxy-node" containerID="cri-o://d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7" gracePeriod=30 Nov 24 21:31:03 crc kubenswrapper[4915]: I1124 21:31:03.935157 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovn-acl-logging" containerID="cri-o://a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52" gracePeriod=30 Nov 24 21:31:03 crc kubenswrapper[4915]: I1124 21:31:03.935492 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="sbdb" containerID="cri-o://8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2" 
gracePeriod=30 Nov 24 21:31:03 crc kubenswrapper[4915]: I1124 21:31:03.994796 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" containerID="cri-o://e47e7134ba9ef188b696b948409a4823455230ba2a169348b8aca9dccac27514" gracePeriod=30 Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.468621 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b8kq8_f5b8930d-4919-4a02-a962-c93b5f8f4ad3/kube-multus/2.log" Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.469074 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b8kq8_f5b8930d-4919-4a02-a962-c93b5f8f4ad3/kube-multus/1.log" Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.469112 4915 generic.go:334] "Generic (PLEG): container finished" podID="f5b8930d-4919-4a02-a962-c93b5f8f4ad3" containerID="b4dbca3c2e2b93a7e5cc889b1b96416b4a9df27216e9ed45cb8ff4b73b75f830" exitCode=2 Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.469168 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b8kq8" event={"ID":"f5b8930d-4919-4a02-a962-c93b5f8f4ad3","Type":"ContainerDied","Data":"b4dbca3c2e2b93a7e5cc889b1b96416b4a9df27216e9ed45cb8ff4b73b75f830"} Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.469202 4915 scope.go:117] "RemoveContainer" containerID="926013354edf1382934bf5829af75dc38d00843d1d93ae599bfcedd1322571d7" Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.469702 4915 scope.go:117] "RemoveContainer" containerID="b4dbca3c2e2b93a7e5cc889b1b96416b4a9df27216e9ed45cb8ff4b73b75f830" Nov 24 21:31:04 crc kubenswrapper[4915]: E1124 21:31:04.469964 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus 
pod=multus-b8kq8_openshift-multus(f5b8930d-4919-4a02-a962-c93b5f8f4ad3)\"" pod="openshift-multus/multus-b8kq8" podUID="f5b8930d-4919-4a02-a962-c93b5f8f4ad3" Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.470955 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/3.log" Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.472750 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovn-acl-logging/0.log" Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.473208 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovn-controller/0.log" Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.473862 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f235785-6b02-4304-99b8-3b216c369d45" containerID="e47e7134ba9ef188b696b948409a4823455230ba2a169348b8aca9dccac27514" exitCode=0 Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.473899 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f235785-6b02-4304-99b8-3b216c369d45" containerID="8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2" exitCode=0 Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.473908 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f235785-6b02-4304-99b8-3b216c369d45" containerID="6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027" exitCode=0 Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.473917 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f235785-6b02-4304-99b8-3b216c369d45" containerID="27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794" exitCode=0 Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.473925 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="3f235785-6b02-4304-99b8-3b216c369d45" containerID="a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52" exitCode=143 Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.473928 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerDied","Data":"e47e7134ba9ef188b696b948409a4823455230ba2a169348b8aca9dccac27514"} Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.473962 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerDied","Data":"8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2"} Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.473973 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerDied","Data":"6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027"} Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.473981 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerDied","Data":"27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794"} Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.473993 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerDied","Data":"a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52"} Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.474003 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" 
event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerDied","Data":"ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93"} Nov 24 21:31:04 crc kubenswrapper[4915]: I1124 21:31:04.473935 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f235785-6b02-4304-99b8-3b216c369d45" containerID="ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93" exitCode=143 Nov 24 21:31:05 crc kubenswrapper[4915]: I1124 21:31:05.483798 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovnkube-controller/3.log" Nov 24 21:31:05 crc kubenswrapper[4915]: I1124 21:31:05.490514 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovn-acl-logging/0.log" Nov 24 21:31:05 crc kubenswrapper[4915]: I1124 21:31:05.491323 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovn-controller/0.log" Nov 24 21:31:05 crc kubenswrapper[4915]: I1124 21:31:05.491797 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f235785-6b02-4304-99b8-3b216c369d45" containerID="ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49" exitCode=0 Nov 24 21:31:05 crc kubenswrapper[4915]: I1124 21:31:05.491827 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f235785-6b02-4304-99b8-3b216c369d45" containerID="d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7" exitCode=0 Nov 24 21:31:05 crc kubenswrapper[4915]: I1124 21:31:05.491856 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerDied","Data":"ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49"} Nov 24 21:31:05 crc kubenswrapper[4915]: I1124 21:31:05.491894 4915 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerDied","Data":"d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7"} Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.360178 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385" Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.361182 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ggln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5446b9c989-wmrpm_openshift-operators(582a64c0-8cfc-43d9-beeb-37f6e2460561): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.362394 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5446b9c989-wmrpm" podUID="582a64c0-8cfc-43d9-beeb-37f6e2460561" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.413861 4915 scope.go:117] "RemoveContainer" containerID="ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960" Nov 24 21:31:10 crc kubenswrapper[4915]: 
I1124 21:31:10.445850 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovn-acl-logging/0.log" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.446312 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovn-controller/0.log" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.446765 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.510444 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hwjlt"] Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.510745 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovn-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.510767 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovn-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.510799 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="kube-rbac-proxy-node" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.510808 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="kube-rbac-proxy-node" Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.510819 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.510826 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 
21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.510836 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.510843 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.510853 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="northd" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.510860 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="northd" Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.510870 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.510878 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.510889 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.510897 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.510909 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovn-acl-logging" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.510916 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovn-acl-logging" Nov 
24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.510927 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.510935 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.510944 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="kubecfg-setup" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.510951 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="kubecfg-setup" Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.510963 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.510970 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.510982 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="sbdb" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.510989 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="sbdb" Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.510999 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="nbdb" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.511006 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="nbdb" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 
21:31:10.511149 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovn-acl-logging" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.511163 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="kube-rbac-proxy-ovn-metrics" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.511174 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="northd" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.511184 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.511195 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.511207 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovn-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.511216 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="nbdb" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.511225 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="sbdb" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.511241 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.511249 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="kube-rbac-proxy-node" Nov 24 21:31:10 crc 
kubenswrapper[4915]: I1124 21:31:10.511526 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.511541 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f235785-6b02-4304-99b8-3b216c369d45" containerName="ovnkube-controller" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.514460 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.533290 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovn-acl-logging/0.log" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.533732 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-jmqqt_3f235785-6b02-4304-99b8-3b216c369d45/ovn-controller/0.log" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.534162 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" event={"ID":"3f235785-6b02-4304-99b8-3b216c369d45","Type":"ContainerDied","Data":"284e5a7af05772bce2a87ba77f8d70473ed3a2118ef6cd86790c844416f36e91"} Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.534301 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-jmqqt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.536027 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b8kq8_f5b8930d-4919-4a02-a962-c93b5f8f4ad3/kube-multus/2.log" Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.542032 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385\\\"\"" pod="openshift-operators/perses-operator-5446b9c989-wmrpm" podUID="582a64c0-8cfc-43d9-beeb-37f6e2460561" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.542232 4915 scope.go:117] "RemoveContainer" containerID="e47e7134ba9ef188b696b948409a4823455230ba2a169348b8aca9dccac27514" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.599281 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-env-overrides\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.599355 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-systemd-units\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.599408 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-ovnkube-script-lib\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: 
\"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.599434 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-slash\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.599490 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-slash" (OuterVolumeSpecName: "host-slash") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.599550 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-run-ovn-kubernetes\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.599610 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.599697 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2w9k\" (UniqueName: \"kubernetes.io/projected/3f235785-6b02-4304-99b8-3b216c369d45-kube-api-access-l2w9k\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600052 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600083 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600371 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600477 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-run-netns\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600510 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-systemd\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600538 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f235785-6b02-4304-99b8-3b216c369d45-ovn-node-metrics-cert\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600579 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-ovn\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600595 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-log-socket\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600609 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-openvswitch\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600621 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-cni-netd\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600635 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-kubelet\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600653 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-node-log\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600665 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-cni-bin\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600684 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-etc-openvswitch\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600697 4915 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-var-lib-openvswitch\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600738 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-ovnkube-config\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600762 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-var-lib-cni-networks-ovn-kubernetes\") pod \"3f235785-6b02-4304-99b8-3b216c369d45\" (UID: \"3f235785-6b02-4304-99b8-3b216c369d45\") " Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600855 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-ovnkube-config\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600887 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-cni-netd\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600906 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-var-lib-openvswitch\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600922 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-etc-openvswitch\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600937 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-run-openvswitch\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600954 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-log-socket\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600979 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-kubelet\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.600992 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-slash\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601032 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-run-ovn\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601046 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-run-ovn-kubernetes\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601070 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-cni-bin\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601100 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-ovnkube-script-lib\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 
21:31:10.601122 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-run-systemd\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601138 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-ovn-node-metrics-cert\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601173 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-systemd-units\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601201 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-run-netns\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601231 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlqv5\" (UniqueName: \"kubernetes.io/projected/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-kube-api-access-rlqv5\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 
21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601247 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-node-log\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601264 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-env-overrides\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601286 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601435 4915 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601452 4915 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601463 4915 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601472 4915 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-slash\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601480 4915 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.601528 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.603278 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.603304 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.603320 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.603347 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-log-socket" (OuterVolumeSpecName: "log-socket") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.603373 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.603394 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.603416 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.603459 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-node-log" (OuterVolumeSpecName: "node-log") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.603482 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.603535 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.608403 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f235785-6b02-4304-99b8-3b216c369d45-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.614359 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.620388 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f235785-6b02-4304-99b8-3b216c369d45-kube-api-access-l2w9k" (OuterVolumeSpecName: "kube-api-access-l2w9k") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "kube-api-access-l2w9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.626537 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "3f235785-6b02-4304-99b8-3b216c369d45" (UID: "3f235785-6b02-4304-99b8-3b216c369d45"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.627962 4915 scope.go:117] "RemoveContainer" containerID="ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960" Nov 24 21:31:10 crc kubenswrapper[4915]: E1124 21:31:10.632895 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960\": container with ID starting with ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960 not found: ID does not exist" containerID="ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.632935 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960"} err="failed to get container status \"ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960\": rpc error: code = NotFound desc = could not find container \"ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960\": container with ID starting with ea71245c41a0f3fc20b632dcfff65b7ee7185eb47f94d40ee9a6391a6dd81960 not found: ID does not exist" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.632960 4915 scope.go:117] "RemoveContainer" containerID="8e3c0b39454a6a8c631a99c9c4faab3c4b956f476ba652178ee7b11041b825f2" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.677670 4915 scope.go:117] "RemoveContainer" containerID="6932314dff48e102e97de6739f7af9e8c748a9a027a00f6cb31e70c19b9f2027" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702471 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlqv5\" (UniqueName: \"kubernetes.io/projected/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-kube-api-access-rlqv5\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702528 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-node-log\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702567 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-env-overrides\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702599 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702633 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-ovnkube-config\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702662 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-cni-netd\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702687 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-var-lib-openvswitch\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702705 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-etc-openvswitch\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702722 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-run-openvswitch\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702742 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-log-socket\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702767 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-kubelet\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc 
kubenswrapper[4915]: I1124 21:31:10.702804 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-slash\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702836 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-run-ovn\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702855 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-run-ovn-kubernetes\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702880 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-cni-bin\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702901 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-ovnkube-script-lib\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702923 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-run-systemd\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702941 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-ovn-node-metrics-cert\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702969 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-systemd-units\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.702997 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-run-netns\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703050 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2w9k\" (UniqueName: \"kubernetes.io/projected/3f235785-6b02-4304-99b8-3b216c369d45-kube-api-access-l2w9k\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703062 4915 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 
24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703071 4915 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703081 4915 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/3f235785-6b02-4304-99b8-3b216c369d45-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703090 4915 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703099 4915 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-log-socket\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703109 4915 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703118 4915 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703128 4915 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703136 4915 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-node-log\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703144 4915 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703154 4915 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703164 4915 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703173 4915 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/3f235785-6b02-4304-99b8-3b216c369d45-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703182 4915 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3f235785-6b02-4304-99b8-3b216c369d45-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703225 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-run-netns\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.703970 4915 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-node-log\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.704669 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-env-overrides\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.704723 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.705292 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-ovnkube-config\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.705341 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-cni-netd\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.705374 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-var-lib-openvswitch\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.705402 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-etc-openvswitch\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.705431 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-run-openvswitch\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.705458 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-log-socket\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.705488 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-kubelet\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.705516 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-slash\") pod \"ovnkube-node-hwjlt\" (UID: 
\"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.705546 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-run-ovn\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.705574 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-run-ovn-kubernetes\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.705600 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-host-cni-bin\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.705905 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-systemd-units\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.705976 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-run-systemd\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc 
kubenswrapper[4915]: I1124 21:31:10.706997 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-ovnkube-script-lib\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.719977 4915 scope.go:117] "RemoveContainer" containerID="27d8ac7b09233a7e864b5dfba1aefafe23e08de6ef34879478cf63c7ec594794" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.730514 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-ovn-node-metrics-cert\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.734506 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlqv5\" (UniqueName: \"kubernetes.io/projected/c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c-kube-api-access-rlqv5\") pod \"ovnkube-node-hwjlt\" (UID: \"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.749269 4915 scope.go:117] "RemoveContainer" containerID="ca8862d30ccca23f9831d25c0184b0e342ca7e8a4c67b0fe96241323ed74bc49" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.797997 4915 scope.go:117] "RemoveContainer" containerID="d6520b970c1e0279fea2d39233569ea1cb355259398e39a9fd63a1b61218dcf7" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.813130 4915 scope.go:117] "RemoveContainer" containerID="a07279f43e6efab7e8ce1739eb703e344027c7da1e758e62f581fe05c5028c52" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.827994 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.834412 4915 scope.go:117] "RemoveContainer" containerID="ce0174ac3e69992dce18dea349b0c6ec98cd95c67d12cbd93110c0d604f76e93" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.867239 4915 scope.go:117] "RemoveContainer" containerID="e6fb280d5b346c93b8ea53e69cce36cb17cdb53c8f8948d1c957293d7ab2441b" Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.871397 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jmqqt"] Nov 24 21:31:10 crc kubenswrapper[4915]: I1124 21:31:10.878290 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-jmqqt"] Nov 24 21:31:11 crc kubenswrapper[4915]: I1124 21:31:11.544588 4915 generic.go:334] "Generic (PLEG): container finished" podID="c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c" containerID="9fc4a82a8f9bec419b5d83d0bdda6ca8f002cede2fa7e25f188baa28addf3803" exitCode=0 Nov 24 21:31:11 crc kubenswrapper[4915]: I1124 21:31:11.544629 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" event={"ID":"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c","Type":"ContainerDied","Data":"9fc4a82a8f9bec419b5d83d0bdda6ca8f002cede2fa7e25f188baa28addf3803"} Nov 24 21:31:11 crc kubenswrapper[4915]: I1124 21:31:11.544696 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" event={"ID":"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c","Type":"ContainerStarted","Data":"3fd924348a8ae8c9992b981c372742b85bcf2996fac98b07c52da0f5acc48e6a"} Nov 24 21:31:11 crc kubenswrapper[4915]: I1124 21:31:11.546513 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw" 
event={"ID":"5720a2a7-47a3-4871-8003-736d6f5d7673","Type":"ContainerStarted","Data":"40c40e66ccd571b10f36bbf8cc8e7750d1fd1720c0e0b3855df3caba084c3d58"} Nov 24 21:31:11 crc kubenswrapper[4915]: I1124 21:31:11.549344 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq" event={"ID":"58a55a1e-cef9-48f3-b562-5b3a02c92cc3","Type":"ContainerStarted","Data":"8ee7b0bc1bcd28b50e97ff95df4b4159e800365af1502e4a95da49b0e6e2173d"} Nov 24 21:31:11 crc kubenswrapper[4915]: I1124 21:31:11.551262 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-sdmdj" event={"ID":"f68bd40b-44eb-4241-9f04-5d3deadf1951","Type":"ContainerStarted","Data":"5339a9e3b3b7893c0f74caffa3a065b35b6329db7a2e8bc9b8db9da654992eeb"} Nov 24 21:31:11 crc kubenswrapper[4915]: I1124 21:31:11.551456 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-sdmdj" Nov 24 21:31:11 crc kubenswrapper[4915]: I1124 21:31:11.556942 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qtz6h" event={"ID":"00e70017-d6b3-4d57-b669-068d163bc3c4","Type":"ContainerStarted","Data":"66c258c82205f159d781a42aaa0f258bfbd2f5369d7cd3c6eae8ae187345de86"} Nov 24 21:31:11 crc kubenswrapper[4915]: I1124 21:31:11.573556 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-sdmdj" Nov 24 21:31:11 crc kubenswrapper[4915]: I1124 21:31:11.656589 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-qtz6h" podStartSLOduration=2.528291394 podStartE2EDuration="17.656564486s" podCreationTimestamp="2025-11-24 21:30:54 +0000 UTC" firstStartedPulling="2025-11-24 21:30:55.415656807 +0000 UTC m=+673.731908980" 
lastFinishedPulling="2025-11-24 21:31:10.543929909 +0000 UTC m=+688.860182072" observedRunningTime="2025-11-24 21:31:11.653005839 +0000 UTC m=+689.969258012" watchObservedRunningTime="2025-11-24 21:31:11.656564486 +0000 UTC m=+689.972816659" Nov 24 21:31:11 crc kubenswrapper[4915]: I1124 21:31:11.697076 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw" podStartSLOduration=2.6767226920000002 podStartE2EDuration="17.697060168s" podCreationTimestamp="2025-11-24 21:30:54 +0000 UTC" firstStartedPulling="2025-11-24 21:30:55.531964861 +0000 UTC m=+673.848217034" lastFinishedPulling="2025-11-24 21:31:10.552302337 +0000 UTC m=+688.868554510" observedRunningTime="2025-11-24 21:31:11.695407703 +0000 UTC m=+690.011659876" watchObservedRunningTime="2025-11-24 21:31:11.697060168 +0000 UTC m=+690.013312341" Nov 24 21:31:11 crc kubenswrapper[4915]: I1124 21:31:11.722728 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq" podStartSLOduration=2.54136724 podStartE2EDuration="17.722711796s" podCreationTimestamp="2025-11-24 21:30:54 +0000 UTC" firstStartedPulling="2025-11-24 21:30:55.488041426 +0000 UTC m=+673.804293599" lastFinishedPulling="2025-11-24 21:31:10.669385982 +0000 UTC m=+688.985638155" observedRunningTime="2025-11-24 21:31:11.714908103 +0000 UTC m=+690.031160276" watchObservedRunningTime="2025-11-24 21:31:11.722711796 +0000 UTC m=+690.038963969" Nov 24 21:31:11 crc kubenswrapper[4915]: I1124 21:31:11.746891 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-sdmdj" podStartSLOduration=2.919106244 podStartE2EDuration="17.746875332s" podCreationTimestamp="2025-11-24 21:30:54 +0000 UTC" firstStartedPulling="2025-11-24 21:30:55.716293085 +0000 UTC m=+674.032545258" 
lastFinishedPulling="2025-11-24 21:31:10.544062163 +0000 UTC m=+688.860314346" observedRunningTime="2025-11-24 21:31:11.746087801 +0000 UTC m=+690.062339974" watchObservedRunningTime="2025-11-24 21:31:11.746875332 +0000 UTC m=+690.063127505" Nov 24 21:31:12 crc kubenswrapper[4915]: I1124 21:31:12.435357 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f235785-6b02-4304-99b8-3b216c369d45" path="/var/lib/kubelet/pods/3f235785-6b02-4304-99b8-3b216c369d45/volumes" Nov 24 21:31:12 crc kubenswrapper[4915]: I1124 21:31:12.564935 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" event={"ID":"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c","Type":"ContainerStarted","Data":"b32c367af1347af5456c21b4bf479e8d27881c356a805e453d21924b669ab770"} Nov 24 21:31:12 crc kubenswrapper[4915]: I1124 21:31:12.564979 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" event={"ID":"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c","Type":"ContainerStarted","Data":"a081664da816a4d91cf53594b7a0f85014891862822aaf4270a8d8105bed0707"} Nov 24 21:31:12 crc kubenswrapper[4915]: I1124 21:31:12.564990 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" event={"ID":"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c","Type":"ContainerStarted","Data":"74630ab39e64e5a8047b8bf9c5d561ecd27b1da344ccc4dca1ea11001a123b22"} Nov 24 21:31:12 crc kubenswrapper[4915]: I1124 21:31:12.565001 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" event={"ID":"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c","Type":"ContainerStarted","Data":"ce1bbcead94b69d2bd30aa611c453892fcff501f3e0318005971b550829819da"} Nov 24 21:31:12 crc kubenswrapper[4915]: I1124 21:31:12.565010 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" 
event={"ID":"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c","Type":"ContainerStarted","Data":"6713e10908a3cd09ff185a0c54dc2d5a684e4b999ac26128924f17a6329a943a"} Nov 24 21:31:12 crc kubenswrapper[4915]: I1124 21:31:12.565019 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" event={"ID":"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c","Type":"ContainerStarted","Data":"412ee1e0d0782002de67beb5c59271e9f2e718b22fb9633719f21500f7521123"} Nov 24 21:31:15 crc kubenswrapper[4915]: I1124 21:31:15.426636 4915 scope.go:117] "RemoveContainer" containerID="b4dbca3c2e2b93a7e5cc889b1b96416b4a9df27216e9ed45cb8ff4b73b75f830" Nov 24 21:31:15 crc kubenswrapper[4915]: E1124 21:31:15.427254 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-b8kq8_openshift-multus(f5b8930d-4919-4a02-a962-c93b5f8f4ad3)\"" pod="openshift-multus/multus-b8kq8" podUID="f5b8930d-4919-4a02-a962-c93b5f8f4ad3" Nov 24 21:31:15 crc kubenswrapper[4915]: I1124 21:31:15.585899 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" event={"ID":"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c","Type":"ContainerStarted","Data":"a8c49ff59aca4de36278c1ba747a855060c213cf812ccc972b10cdcb7e207a3e"} Nov 24 21:31:17 crc kubenswrapper[4915]: I1124 21:31:17.600603 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" event={"ID":"c77c5173-5e3c-4fa5-973b-b7c3c48ecf1c","Type":"ContainerStarted","Data":"89f3e80262dee782e5862f71581eac955e815b91ccde642d4cad4e0a80ff59d6"} Nov 24 21:31:17 crc kubenswrapper[4915]: I1124 21:31:17.602228 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:17 crc kubenswrapper[4915]: I1124 21:31:17.602257 4915 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:17 crc kubenswrapper[4915]: I1124 21:31:17.602300 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:17 crc kubenswrapper[4915]: I1124 21:31:17.643656 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" podStartSLOduration=7.643631774 podStartE2EDuration="7.643631774s" podCreationTimestamp="2025-11-24 21:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:31:17.638557175 +0000 UTC m=+695.954809378" watchObservedRunningTime="2025-11-24 21:31:17.643631774 +0000 UTC m=+695.959883947" Nov 24 21:31:17 crc kubenswrapper[4915]: I1124 21:31:17.664710 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:17 crc kubenswrapper[4915]: I1124 21:31:17.677818 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.266003 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-zp2vk"] Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.267383 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-zp2vk" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.270286 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.270534 4915 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-bvjns" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.279345 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.285053 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-7k4q2"] Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.285720 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.289331 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-zp2vk"] Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.295199 4915 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vjn79" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.303344 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-7k4q2"] Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.328828 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-wstf7"] Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.329630 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.336662 4915 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-4kz2z" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.379848 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-wstf7"] Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.450388 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwxw7\" (UniqueName: \"kubernetes.io/projected/357bd908-9457-4270-9b3b-a6b6ad47016f-kube-api-access-kwxw7\") pod \"cert-manager-webhook-5655c58dd6-wstf7\" (UID: \"357bd908-9457-4270-9b3b-a6b6ad47016f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.450651 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j74dt\" (UniqueName: \"kubernetes.io/projected/dde8b00e-87d8-4fe6-8a29-82774ca1e721-kube-api-access-j74dt\") pod \"cert-manager-cainjector-7f985d654d-7k4q2\" (UID: \"dde8b00e-87d8-4fe6-8a29-82774ca1e721\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.450726 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdzhn\" (UniqueName: \"kubernetes.io/projected/18dddeaf-8d70-474f-8a26-d39556870aa5-kube-api-access-qdzhn\") pod \"cert-manager-5b446d88c5-zp2vk\" (UID: \"18dddeaf-8d70-474f-8a26-d39556870aa5\") " pod="cert-manager/cert-manager-5b446d88c5-zp2vk" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.551832 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j74dt\" (UniqueName: 
\"kubernetes.io/projected/dde8b00e-87d8-4fe6-8a29-82774ca1e721-kube-api-access-j74dt\") pod \"cert-manager-cainjector-7f985d654d-7k4q2\" (UID: \"dde8b00e-87d8-4fe6-8a29-82774ca1e721\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.551901 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdzhn\" (UniqueName: \"kubernetes.io/projected/18dddeaf-8d70-474f-8a26-d39556870aa5-kube-api-access-qdzhn\") pod \"cert-manager-5b446d88c5-zp2vk\" (UID: \"18dddeaf-8d70-474f-8a26-d39556870aa5\") " pod="cert-manager/cert-manager-5b446d88c5-zp2vk" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.552012 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwxw7\" (UniqueName: \"kubernetes.io/projected/357bd908-9457-4270-9b3b-a6b6ad47016f-kube-api-access-kwxw7\") pod \"cert-manager-webhook-5655c58dd6-wstf7\" (UID: \"357bd908-9457-4270-9b3b-a6b6ad47016f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.586438 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j74dt\" (UniqueName: \"kubernetes.io/projected/dde8b00e-87d8-4fe6-8a29-82774ca1e721-kube-api-access-j74dt\") pod \"cert-manager-cainjector-7f985d654d-7k4q2\" (UID: \"dde8b00e-87d8-4fe6-8a29-82774ca1e721\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.590838 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwxw7\" (UniqueName: \"kubernetes.io/projected/357bd908-9457-4270-9b3b-a6b6ad47016f-kube-api-access-kwxw7\") pod \"cert-manager-webhook-5655c58dd6-wstf7\" (UID: \"357bd908-9457-4270-9b3b-a6b6ad47016f\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.593926 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdzhn\" (UniqueName: \"kubernetes.io/projected/18dddeaf-8d70-474f-8a26-d39556870aa5-kube-api-access-qdzhn\") pod \"cert-manager-5b446d88c5-zp2vk\" (UID: \"18dddeaf-8d70-474f-8a26-d39556870aa5\") " pod="cert-manager/cert-manager-5b446d88c5-zp2vk" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.601769 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-zp2vk" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.628171 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" Nov 24 21:31:20 crc kubenswrapper[4915]: E1124 21:31:20.642547 4915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-5b446d88c5-zp2vk_cert-manager_18dddeaf-8d70-474f-8a26-d39556870aa5_0(9786b681478009bd1e6b21e355a01c3c944d28999906215ce2ac26adf5842010): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:31:20 crc kubenswrapper[4915]: E1124 21:31:20.642704 4915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-5b446d88c5-zp2vk_cert-manager_18dddeaf-8d70-474f-8a26-d39556870aa5_0(9786b681478009bd1e6b21e355a01c3c944d28999906215ce2ac26adf5842010): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-5b446d88c5-zp2vk" Nov 24 21:31:20 crc kubenswrapper[4915]: E1124 21:31:20.642798 4915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-5b446d88c5-zp2vk_cert-manager_18dddeaf-8d70-474f-8a26-d39556870aa5_0(9786b681478009bd1e6b21e355a01c3c944d28999906215ce2ac26adf5842010): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-5b446d88c5-zp2vk" Nov 24 21:31:20 crc kubenswrapper[4915]: E1124 21:31:20.642910 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-5b446d88c5-zp2vk_cert-manager(18dddeaf-8d70-474f-8a26-d39556870aa5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-5b446d88c5-zp2vk_cert-manager(18dddeaf-8d70-474f-8a26-d39556870aa5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-5b446d88c5-zp2vk_cert-manager_18dddeaf-8d70-474f-8a26-d39556870aa5_0(9786b681478009bd1e6b21e355a01c3c944d28999906215ce2ac26adf5842010): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-5b446d88c5-zp2vk" podUID="18dddeaf-8d70-474f-8a26-d39556870aa5" Nov 24 21:31:20 crc kubenswrapper[4915]: E1124 21:31:20.658768 4915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-7f985d654d-7k4q2_cert-manager_dde8b00e-87d8-4fe6-8a29-82774ca1e721_0(e03789102f24542eab53086d70f5724fa7863072f92e685b2beddd8cc01a9893): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 24 21:31:20 crc kubenswrapper[4915]: E1124 21:31:20.658902 4915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-7f985d654d-7k4q2_cert-manager_dde8b00e-87d8-4fe6-8a29-82774ca1e721_0(e03789102f24542eab53086d70f5724fa7863072f92e685b2beddd8cc01a9893): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" Nov 24 21:31:20 crc kubenswrapper[4915]: E1124 21:31:20.658926 4915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-7f985d654d-7k4q2_cert-manager_dde8b00e-87d8-4fe6-8a29-82774ca1e721_0(e03789102f24542eab53086d70f5724fa7863072f92e685b2beddd8cc01a9893): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" Nov 24 21:31:20 crc kubenswrapper[4915]: I1124 21:31:20.658958 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:20 crc kubenswrapper[4915]: E1124 21:31:20.658971 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-7f985d654d-7k4q2_cert-manager(dde8b00e-87d8-4fe6-8a29-82774ca1e721)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-7f985d654d-7k4q2_cert-manager(dde8b00e-87d8-4fe6-8a29-82774ca1e721)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-7f985d654d-7k4q2_cert-manager_dde8b00e-87d8-4fe6-8a29-82774ca1e721_0(e03789102f24542eab53086d70f5724fa7863072f92e685b2beddd8cc01a9893): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" podUID="dde8b00e-87d8-4fe6-8a29-82774ca1e721" Nov 24 21:31:20 crc kubenswrapper[4915]: E1124 21:31:20.698951 4915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-5655c58dd6-wstf7_cert-manager_357bd908-9457-4270-9b3b-a6b6ad47016f_0(be1683a9f6294fd28d5b001e4456eb39fb714945ac5e6b727840e2d85c347f8d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:31:20 crc kubenswrapper[4915]: E1124 21:31:20.699064 4915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-5655c58dd6-wstf7_cert-manager_357bd908-9457-4270-9b3b-a6b6ad47016f_0(be1683a9f6294fd28d5b001e4456eb39fb714945ac5e6b727840e2d85c347f8d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:20 crc kubenswrapper[4915]: E1124 21:31:20.699089 4915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-5655c58dd6-wstf7_cert-manager_357bd908-9457-4270-9b3b-a6b6ad47016f_0(be1683a9f6294fd28d5b001e4456eb39fb714945ac5e6b727840e2d85c347f8d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:20 crc kubenswrapper[4915]: E1124 21:31:20.699137 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-5655c58dd6-wstf7_cert-manager(357bd908-9457-4270-9b3b-a6b6ad47016f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-5655c58dd6-wstf7_cert-manager(357bd908-9457-4270-9b3b-a6b6ad47016f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-5655c58dd6-wstf7_cert-manager_357bd908-9457-4270-9b3b-a6b6ad47016f_0(be1683a9f6294fd28d5b001e4456eb39fb714945ac5e6b727840e2d85c347f8d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" podUID="357bd908-9457-4270-9b3b-a6b6ad47016f" Nov 24 21:31:21 crc kubenswrapper[4915]: I1124 21:31:21.632316 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-zp2vk" Nov 24 21:31:21 crc kubenswrapper[4915]: I1124 21:31:21.632349 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:21 crc kubenswrapper[4915]: I1124 21:31:21.632470 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" Nov 24 21:31:21 crc kubenswrapper[4915]: I1124 21:31:21.633265 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" Nov 24 21:31:21 crc kubenswrapper[4915]: I1124 21:31:21.633271 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-zp2vk" Nov 24 21:31:21 crc kubenswrapper[4915]: I1124 21:31:21.633424 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:21 crc kubenswrapper[4915]: E1124 21:31:21.713582 4915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-7f985d654d-7k4q2_cert-manager_dde8b00e-87d8-4fe6-8a29-82774ca1e721_0(84b8a981b4970d993f083a01036c9405011468162217df6152620a57da9e264c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:31:21 crc kubenswrapper[4915]: E1124 21:31:21.713687 4915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-7f985d654d-7k4q2_cert-manager_dde8b00e-87d8-4fe6-8a29-82774ca1e721_0(84b8a981b4970d993f083a01036c9405011468162217df6152620a57da9e264c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" Nov 24 21:31:21 crc kubenswrapper[4915]: E1124 21:31:21.713745 4915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-7f985d654d-7k4q2_cert-manager_dde8b00e-87d8-4fe6-8a29-82774ca1e721_0(84b8a981b4970d993f083a01036c9405011468162217df6152620a57da9e264c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" Nov 24 21:31:21 crc kubenswrapper[4915]: E1124 21:31:21.713939 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-7f985d654d-7k4q2_cert-manager(dde8b00e-87d8-4fe6-8a29-82774ca1e721)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-7f985d654d-7k4q2_cert-manager(dde8b00e-87d8-4fe6-8a29-82774ca1e721)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-7f985d654d-7k4q2_cert-manager_dde8b00e-87d8-4fe6-8a29-82774ca1e721_0(84b8a981b4970d993f083a01036c9405011468162217df6152620a57da9e264c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" podUID="dde8b00e-87d8-4fe6-8a29-82774ca1e721" Nov 24 21:31:21 crc kubenswrapper[4915]: E1124 21:31:21.729504 4915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-5b446d88c5-zp2vk_cert-manager_18dddeaf-8d70-474f-8a26-d39556870aa5_0(7e98b4c306c78c7507c0775a8273e78db21af33e72a9e8d71d65f31ea5467fa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 24 21:31:21 crc kubenswrapper[4915]: E1124 21:31:21.729626 4915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-5b446d88c5-zp2vk_cert-manager_18dddeaf-8d70-474f-8a26-d39556870aa5_0(7e98b4c306c78c7507c0775a8273e78db21af33e72a9e8d71d65f31ea5467fa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-5b446d88c5-zp2vk" Nov 24 21:31:21 crc kubenswrapper[4915]: E1124 21:31:21.729666 4915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-5b446d88c5-zp2vk_cert-manager_18dddeaf-8d70-474f-8a26-d39556870aa5_0(7e98b4c306c78c7507c0775a8273e78db21af33e72a9e8d71d65f31ea5467fa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-5b446d88c5-zp2vk" Nov 24 21:31:21 crc kubenswrapper[4915]: E1124 21:31:21.729746 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-5b446d88c5-zp2vk_cert-manager(18dddeaf-8d70-474f-8a26-d39556870aa5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-5b446d88c5-zp2vk_cert-manager(18dddeaf-8d70-474f-8a26-d39556870aa5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-5b446d88c5-zp2vk_cert-manager_18dddeaf-8d70-474f-8a26-d39556870aa5_0(7e98b4c306c78c7507c0775a8273e78db21af33e72a9e8d71d65f31ea5467fa3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-5b446d88c5-zp2vk" podUID="18dddeaf-8d70-474f-8a26-d39556870aa5" Nov 24 21:31:21 crc kubenswrapper[4915]: E1124 21:31:21.743378 4915 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-5655c58dd6-wstf7_cert-manager_357bd908-9457-4270-9b3b-a6b6ad47016f_0(a0ed1eed60902261758863a792d58ae38da97bf2a2289c2a8fc3952187ee3fe2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Nov 24 21:31:21 crc kubenswrapper[4915]: E1124 21:31:21.743492 4915 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-5655c58dd6-wstf7_cert-manager_357bd908-9457-4270-9b3b-a6b6ad47016f_0(a0ed1eed60902261758863a792d58ae38da97bf2a2289c2a8fc3952187ee3fe2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:21 crc kubenswrapper[4915]: E1124 21:31:21.743545 4915 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-5655c58dd6-wstf7_cert-manager_357bd908-9457-4270-9b3b-a6b6ad47016f_0(a0ed1eed60902261758863a792d58ae38da97bf2a2289c2a8fc3952187ee3fe2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:21 crc kubenswrapper[4915]: E1124 21:31:21.743633 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-5655c58dd6-wstf7_cert-manager(357bd908-9457-4270-9b3b-a6b6ad47016f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-5655c58dd6-wstf7_cert-manager(357bd908-9457-4270-9b3b-a6b6ad47016f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-5655c58dd6-wstf7_cert-manager_357bd908-9457-4270-9b3b-a6b6ad47016f_0(a0ed1eed60902261758863a792d58ae38da97bf2a2289c2a8fc3952187ee3fe2): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" podUID="357bd908-9457-4270-9b3b-a6b6ad47016f" Nov 24 21:31:24 crc kubenswrapper[4915]: I1124 21:31:24.656442 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-wmrpm" event={"ID":"582a64c0-8cfc-43d9-beeb-37f6e2460561","Type":"ContainerStarted","Data":"d5156e7a04275548e6b6f2225e441ec532e3ac24223c6fef33984ad59ea7631a"} Nov 24 21:31:24 crc kubenswrapper[4915]: I1124 21:31:24.659010 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-wmrpm" Nov 24 21:31:24 crc kubenswrapper[4915]: I1124 21:31:24.685087 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-wmrpm" podStartSLOduration=1.022905326 podStartE2EDuration="29.685062321s" podCreationTimestamp="2025-11-24 21:30:55 +0000 UTC" firstStartedPulling="2025-11-24 21:30:55.812761939 +0000 UTC m=+674.129014112" lastFinishedPulling="2025-11-24 21:31:24.474918924 +0000 UTC m=+702.791171107" observedRunningTime="2025-11-24 21:31:24.681366899 +0000 UTC m=+702.997619112" watchObservedRunningTime="2025-11-24 21:31:24.685062321 +0000 UTC m=+703.001314524" Nov 24 21:31:27 crc kubenswrapper[4915]: I1124 21:31:27.427112 4915 scope.go:117] "RemoveContainer" containerID="b4dbca3c2e2b93a7e5cc889b1b96416b4a9df27216e9ed45cb8ff4b73b75f830" Nov 24 21:31:27 crc kubenswrapper[4915]: I1124 21:31:27.677591 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-b8kq8_f5b8930d-4919-4a02-a962-c93b5f8f4ad3/kube-multus/2.log" Nov 24 21:31:27 crc kubenswrapper[4915]: I1124 21:31:27.678087 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-b8kq8" event={"ID":"f5b8930d-4919-4a02-a962-c93b5f8f4ad3","Type":"ContainerStarted","Data":"9731f9fe172f813d3199794426e9a7782c1374777dac7f99515b6e9cfaf196a9"} Nov 
24 21:31:34 crc kubenswrapper[4915]: I1124 21:31:34.426749 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:34 crc kubenswrapper[4915]: I1124 21:31:34.428109 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:34 crc kubenswrapper[4915]: I1124 21:31:34.920048 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-wstf7"] Nov 24 21:31:34 crc kubenswrapper[4915]: W1124 21:31:34.933842 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod357bd908_9457_4270_9b3b_a6b6ad47016f.slice/crio-e977741d91d514f74341bff744e6b37609da4978c1060eef1d6be3695a5f3831 WatchSource:0}: Error finding container e977741d91d514f74341bff744e6b37609da4978c1060eef1d6be3695a5f3831: Status 404 returned error can't find the container with id e977741d91d514f74341bff744e6b37609da4978c1060eef1d6be3695a5f3831 Nov 24 21:31:35 crc kubenswrapper[4915]: I1124 21:31:35.426761 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" Nov 24 21:31:35 crc kubenswrapper[4915]: I1124 21:31:35.426861 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-zp2vk" Nov 24 21:31:35 crc kubenswrapper[4915]: I1124 21:31:35.427931 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" Nov 24 21:31:35 crc kubenswrapper[4915]: I1124 21:31:35.428115 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-zp2vk" Nov 24 21:31:35 crc kubenswrapper[4915]: I1124 21:31:35.559405 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-wmrpm" Nov 24 21:31:35 crc kubenswrapper[4915]: I1124 21:31:35.776861 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" event={"ID":"357bd908-9457-4270-9b3b-a6b6ad47016f","Type":"ContainerStarted","Data":"e977741d91d514f74341bff744e6b37609da4978c1060eef1d6be3695a5f3831"} Nov 24 21:31:35 crc kubenswrapper[4915]: I1124 21:31:35.822525 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-zp2vk"] Nov 24 21:31:35 crc kubenswrapper[4915]: W1124 21:31:35.927376 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18dddeaf_8d70_474f_8a26_d39556870aa5.slice/crio-3d6047d29da226c29dd1932f03cfbb428e465de85f687cd489a4447baaa09214 WatchSource:0}: Error finding container 3d6047d29da226c29dd1932f03cfbb428e465de85f687cd489a4447baaa09214: Status 404 returned error can't find the container with id 3d6047d29da226c29dd1932f03cfbb428e465de85f687cd489a4447baaa09214 Nov 24 21:31:36 crc kubenswrapper[4915]: I1124 21:31:36.054021 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-7k4q2"] Nov 24 21:31:36 crc kubenswrapper[4915]: I1124 21:31:36.784997 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-zp2vk" event={"ID":"18dddeaf-8d70-474f-8a26-d39556870aa5","Type":"ContainerStarted","Data":"3d6047d29da226c29dd1932f03cfbb428e465de85f687cd489a4447baaa09214"} Nov 24 21:31:40 crc kubenswrapper[4915]: I1124 21:31:40.806434 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" 
event={"ID":"dde8b00e-87d8-4fe6-8a29-82774ca1e721","Type":"ContainerStarted","Data":"c776566d1e4ae95e24f971e1ee736f347dc344868358f42a898fcf35f9937644"} Nov 24 21:31:40 crc kubenswrapper[4915]: I1124 21:31:40.887928 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hwjlt" Nov 24 21:31:43 crc kubenswrapper[4915]: I1124 21:31:43.828738 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" event={"ID":"dde8b00e-87d8-4fe6-8a29-82774ca1e721","Type":"ContainerStarted","Data":"9e236c4b96803fe0081e1c44a30f28a072019342631416eb2c05225d64ce3d16"} Nov 24 21:31:43 crc kubenswrapper[4915]: I1124 21:31:43.830695 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-zp2vk" event={"ID":"18dddeaf-8d70-474f-8a26-d39556870aa5","Type":"ContainerStarted","Data":"50336637eaffbf560e6a8cdcecdcb921e4bb14e28158f56cf8806b47b85f86ef"} Nov 24 21:31:43 crc kubenswrapper[4915]: I1124 21:31:43.832347 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" event={"ID":"357bd908-9457-4270-9b3b-a6b6ad47016f","Type":"ContainerStarted","Data":"3c1017fbe7b2bd17df40608e3634f899692ca16a92ae9a881c562640cb03350a"} Nov 24 21:31:43 crc kubenswrapper[4915]: I1124 21:31:43.832596 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:43 crc kubenswrapper[4915]: I1124 21:31:43.865345 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-7k4q2" podStartSLOduration=20.690748689 podStartE2EDuration="23.865320323s" podCreationTimestamp="2025-11-24 21:31:20 +0000 UTC" firstStartedPulling="2025-11-24 21:31:39.869738145 +0000 UTC m=+718.185990328" lastFinishedPulling="2025-11-24 21:31:43.044309799 +0000 UTC m=+721.360561962" 
observedRunningTime="2025-11-24 21:31:43.861071027 +0000 UTC m=+722.177323220" watchObservedRunningTime="2025-11-24 21:31:43.865320323 +0000 UTC m=+722.181572516" Nov 24 21:31:43 crc kubenswrapper[4915]: I1124 21:31:43.888552 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-zp2vk" podStartSLOduration=16.726350557 podStartE2EDuration="23.888534554s" podCreationTimestamp="2025-11-24 21:31:20 +0000 UTC" firstStartedPulling="2025-11-24 21:31:35.931042913 +0000 UTC m=+714.247295086" lastFinishedPulling="2025-11-24 21:31:43.0932269 +0000 UTC m=+721.409479083" observedRunningTime="2025-11-24 21:31:43.885579183 +0000 UTC m=+722.201831386" watchObservedRunningTime="2025-11-24 21:31:43.888534554 +0000 UTC m=+722.204786727" Nov 24 21:31:43 crc kubenswrapper[4915]: I1124 21:31:43.917751 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" podStartSLOduration=15.817431831 podStartE2EDuration="23.917732947s" podCreationTimestamp="2025-11-24 21:31:20 +0000 UTC" firstStartedPulling="2025-11-24 21:31:34.937661301 +0000 UTC m=+713.253913474" lastFinishedPulling="2025-11-24 21:31:43.037962417 +0000 UTC m=+721.354214590" observedRunningTime="2025-11-24 21:31:43.902953645 +0000 UTC m=+722.219205878" watchObservedRunningTime="2025-11-24 21:31:43.917732947 +0000 UTC m=+722.233985120" Nov 24 21:31:50 crc kubenswrapper[4915]: I1124 21:31:50.663773 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-wstf7" Nov 24 21:31:54 crc kubenswrapper[4915]: I1124 21:31:54.327708 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:31:54 crc 
kubenswrapper[4915]: I1124 21:31:54.328460 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:32:10 crc kubenswrapper[4915]: I1124 21:32:10.721428 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-p7n9h"] Nov 24 21:32:10 crc kubenswrapper[4915]: I1124 21:32:10.722829 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" podUID="d83bcc5d-5419-4656-882e-964b0d87e966" containerName="controller-manager" containerID="cri-o://8e4e2983221b2abbdd5b0329b83a3f246bc58650f1066548c29a8ecf235ac018" gracePeriod=30 Nov 24 21:32:10 crc kubenswrapper[4915]: I1124 21:32:10.804322 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz"] Nov 24 21:32:10 crc kubenswrapper[4915]: I1124 21:32:10.804530 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" podUID="de799d6d-f599-4dfd-a95b-4f7daf00d23a" containerName="route-controller-manager" containerID="cri-o://b07a68ec4794038dfd4ead15b5a42a212636a6d7f7aadf45cc8f71afa776286d" gracePeriod=30 Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.041413 4915 generic.go:334] "Generic (PLEG): container finished" podID="de799d6d-f599-4dfd-a95b-4f7daf00d23a" containerID="b07a68ec4794038dfd4ead15b5a42a212636a6d7f7aadf45cc8f71afa776286d" exitCode=0 Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.041508 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" event={"ID":"de799d6d-f599-4dfd-a95b-4f7daf00d23a","Type":"ContainerDied","Data":"b07a68ec4794038dfd4ead15b5a42a212636a6d7f7aadf45cc8f71afa776286d"} Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.044080 4915 generic.go:334] "Generic (PLEG): container finished" podID="d83bcc5d-5419-4656-882e-964b0d87e966" containerID="8e4e2983221b2abbdd5b0329b83a3f246bc58650f1066548c29a8ecf235ac018" exitCode=0 Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.044107 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" event={"ID":"d83bcc5d-5419-4656-882e-964b0d87e966","Type":"ContainerDied","Data":"8e4e2983221b2abbdd5b0329b83a3f246bc58650f1066548c29a8ecf235ac018"} Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.129268 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.197852 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.275383 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-proxy-ca-bundles\") pod \"d83bcc5d-5419-4656-882e-964b0d87e966\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.275432 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-config\") pod \"d83bcc5d-5419-4656-882e-964b0d87e966\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.275482 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de799d6d-f599-4dfd-a95b-4f7daf00d23a-client-ca\") pod \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.275513 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d83bcc5d-5419-4656-882e-964b0d87e966-serving-cert\") pod \"d83bcc5d-5419-4656-882e-964b0d87e966\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.275550 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmdhg\" (UniqueName: \"kubernetes.io/projected/de799d6d-f599-4dfd-a95b-4f7daf00d23a-kube-api-access-wmdhg\") pod \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.275645 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-mqnnm\" (UniqueName: \"kubernetes.io/projected/d83bcc5d-5419-4656-882e-964b0d87e966-kube-api-access-mqnnm\") pod \"d83bcc5d-5419-4656-882e-964b0d87e966\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.275669 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de799d6d-f599-4dfd-a95b-4f7daf00d23a-serving-cert\") pod \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.275738 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de799d6d-f599-4dfd-a95b-4f7daf00d23a-config\") pod \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\" (UID: \"de799d6d-f599-4dfd-a95b-4f7daf00d23a\") " Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.275768 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-client-ca\") pod \"d83bcc5d-5419-4656-882e-964b0d87e966\" (UID: \"d83bcc5d-5419-4656-882e-964b0d87e966\") " Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.276337 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de799d6d-f599-4dfd-a95b-4f7daf00d23a-client-ca" (OuterVolumeSpecName: "client-ca") pod "de799d6d-f599-4dfd-a95b-4f7daf00d23a" (UID: "de799d6d-f599-4dfd-a95b-4f7daf00d23a"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.276615 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-client-ca" (OuterVolumeSpecName: "client-ca") pod "d83bcc5d-5419-4656-882e-964b0d87e966" (UID: "d83bcc5d-5419-4656-882e-964b0d87e966"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.276862 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-config" (OuterVolumeSpecName: "config") pod "d83bcc5d-5419-4656-882e-964b0d87e966" (UID: "d83bcc5d-5419-4656-882e-964b0d87e966"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.276944 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d83bcc5d-5419-4656-882e-964b0d87e966" (UID: "d83bcc5d-5419-4656-882e-964b0d87e966"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.277017 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de799d6d-f599-4dfd-a95b-4f7daf00d23a-config" (OuterVolumeSpecName: "config") pod "de799d6d-f599-4dfd-a95b-4f7daf00d23a" (UID: "de799d6d-f599-4dfd-a95b-4f7daf00d23a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.281225 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de799d6d-f599-4dfd-a95b-4f7daf00d23a-kube-api-access-wmdhg" (OuterVolumeSpecName: "kube-api-access-wmdhg") pod "de799d6d-f599-4dfd-a95b-4f7daf00d23a" (UID: "de799d6d-f599-4dfd-a95b-4f7daf00d23a"). InnerVolumeSpecName "kube-api-access-wmdhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.281819 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d83bcc5d-5419-4656-882e-964b0d87e966-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d83bcc5d-5419-4656-882e-964b0d87e966" (UID: "d83bcc5d-5419-4656-882e-964b0d87e966"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.281851 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de799d6d-f599-4dfd-a95b-4f7daf00d23a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "de799d6d-f599-4dfd-a95b-4f7daf00d23a" (UID: "de799d6d-f599-4dfd-a95b-4f7daf00d23a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.281928 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83bcc5d-5419-4656-882e-964b0d87e966-kube-api-access-mqnnm" (OuterVolumeSpecName: "kube-api-access-mqnnm") pod "d83bcc5d-5419-4656-882e-964b0d87e966" (UID: "d83bcc5d-5419-4656-882e-964b0d87e966"). InnerVolumeSpecName "kube-api-access-mqnnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.377310 4915 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.377367 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.377387 4915 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de799d6d-f599-4dfd-a95b-4f7daf00d23a-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.377404 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d83bcc5d-5419-4656-882e-964b0d87e966-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.377425 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmdhg\" (UniqueName: \"kubernetes.io/projected/de799d6d-f599-4dfd-a95b-4f7daf00d23a-kube-api-access-wmdhg\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.377447 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqnnm\" (UniqueName: \"kubernetes.io/projected/d83bcc5d-5419-4656-882e-964b0d87e966-kube-api-access-mqnnm\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.377465 4915 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de799d6d-f599-4dfd-a95b-4f7daf00d23a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.377483 4915 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de799d6d-f599-4dfd-a95b-4f7daf00d23a-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:11 crc kubenswrapper[4915]: I1124 21:32:11.377500 4915 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d83bcc5d-5419-4656-882e-964b0d87e966-client-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.053345 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.053333 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz" event={"ID":"de799d6d-f599-4dfd-a95b-4f7daf00d23a","Type":"ContainerDied","Data":"c4aa023d6e9855829ef6fd96609477f8aaf39216bb8e0bf1357c4f5823661e6a"} Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.053504 4915 scope.go:117] "RemoveContainer" containerID="b07a68ec4794038dfd4ead15b5a42a212636a6d7f7aadf45cc8f71afa776286d" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.058409 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" event={"ID":"d83bcc5d-5419-4656-882e-964b0d87e966","Type":"ContainerDied","Data":"86301b4b65b373895b342259cb2b759b3ba76d856be8f748e2c620d977d3fd5b"} Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.058557 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-p7n9h" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.093569 4915 scope.go:117] "RemoveContainer" containerID="8e4e2983221b2abbdd5b0329b83a3f246bc58650f1066548c29a8ecf235ac018" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.108534 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz"] Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.121185 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-762rz"] Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.131074 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-p7n9h"] Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.136681 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-p7n9h"] Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.267469 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bc59c6d67-q6nn6"] Nov 24 21:32:12 crc kubenswrapper[4915]: E1124 21:32:12.268079 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de799d6d-f599-4dfd-a95b-4f7daf00d23a" containerName="route-controller-manager" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.268114 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="de799d6d-f599-4dfd-a95b-4f7daf00d23a" containerName="route-controller-manager" Nov 24 21:32:12 crc kubenswrapper[4915]: E1124 21:32:12.268183 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83bcc5d-5419-4656-882e-964b0d87e966" containerName="controller-manager" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.268203 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d83bcc5d-5419-4656-882e-964b0d87e966" containerName="controller-manager" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.268503 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83bcc5d-5419-4656-882e-964b0d87e966" containerName="controller-manager" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.268533 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="de799d6d-f599-4dfd-a95b-4f7daf00d23a" containerName="route-controller-manager" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.269601 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.272997 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js"] Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.277298 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.277952 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.278587 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.278672 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.280504 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.281408 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.282969 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.283236 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.293439 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.294105 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.294421 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.295568 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.296059 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.297722 4915 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.306988 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js"] Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.316271 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bc59c6d67-q6nn6"] Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.401936 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b6752cc-d035-4b8d-873d-83ba0d9033ca-config\") pod \"route-controller-manager-d57478ccc-z94js\" (UID: \"2b6752cc-d035-4b8d-873d-83ba0d9033ca\") " pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.401993 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-serving-cert\") pod \"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.402041 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-client-ca\") pod \"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.402060 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmvxr\" (UniqueName: 
\"kubernetes.io/projected/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-kube-api-access-bmvxr\") pod \"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.402078 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b6752cc-d035-4b8d-873d-83ba0d9033ca-client-ca\") pod \"route-controller-manager-d57478ccc-z94js\" (UID: \"2b6752cc-d035-4b8d-873d-83ba0d9033ca\") " pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.402188 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-config\") pod \"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.402292 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-proxy-ca-bundles\") pod \"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.402391 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b6752cc-d035-4b8d-873d-83ba0d9033ca-serving-cert\") pod \"route-controller-manager-d57478ccc-z94js\" (UID: \"2b6752cc-d035-4b8d-873d-83ba0d9033ca\") " 
pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.402463 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqd8z\" (UniqueName: \"kubernetes.io/projected/2b6752cc-d035-4b8d-873d-83ba0d9033ca-kube-api-access-cqd8z\") pod \"route-controller-manager-d57478ccc-z94js\" (UID: \"2b6752cc-d035-4b8d-873d-83ba0d9033ca\") " pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.436960 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83bcc5d-5419-4656-882e-964b0d87e966" path="/var/lib/kubelet/pods/d83bcc5d-5419-4656-882e-964b0d87e966/volumes" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.438463 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de799d6d-f599-4dfd-a95b-4f7daf00d23a" path="/var/lib/kubelet/pods/de799d6d-f599-4dfd-a95b-4f7daf00d23a/volumes" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.504832 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b6752cc-d035-4b8d-873d-83ba0d9033ca-serving-cert\") pod \"route-controller-manager-d57478ccc-z94js\" (UID: \"2b6752cc-d035-4b8d-873d-83ba0d9033ca\") " pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.505132 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqd8z\" (UniqueName: \"kubernetes.io/projected/2b6752cc-d035-4b8d-873d-83ba0d9033ca-kube-api-access-cqd8z\") pod \"route-controller-manager-d57478ccc-z94js\" (UID: \"2b6752cc-d035-4b8d-873d-83ba0d9033ca\") " pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.505240 
4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b6752cc-d035-4b8d-873d-83ba0d9033ca-config\") pod \"route-controller-manager-d57478ccc-z94js\" (UID: \"2b6752cc-d035-4b8d-873d-83ba0d9033ca\") " pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.505342 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-serving-cert\") pod \"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.505439 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-client-ca\") pod \"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.505522 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmvxr\" (UniqueName: \"kubernetes.io/projected/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-kube-api-access-bmvxr\") pod \"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.505599 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b6752cc-d035-4b8d-873d-83ba0d9033ca-client-ca\") pod \"route-controller-manager-d57478ccc-z94js\" (UID: \"2b6752cc-d035-4b8d-873d-83ba0d9033ca\") " 
pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.505712 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-config\") pod \"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.505861 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-proxy-ca-bundles\") pod \"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.507550 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2b6752cc-d035-4b8d-873d-83ba0d9033ca-client-ca\") pod \"route-controller-manager-d57478ccc-z94js\" (UID: \"2b6752cc-d035-4b8d-873d-83ba0d9033ca\") " pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.508079 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b6752cc-d035-4b8d-873d-83ba0d9033ca-config\") pod \"route-controller-manager-d57478ccc-z94js\" (UID: \"2b6752cc-d035-4b8d-873d-83ba0d9033ca\") " pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.509259 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-client-ca\") pod 
\"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.510762 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-proxy-ca-bundles\") pod \"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.511468 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-config\") pod \"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.513942 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-serving-cert\") pod \"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.517005 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2b6752cc-d035-4b8d-873d-83ba0d9033ca-serving-cert\") pod \"route-controller-manager-d57478ccc-z94js\" (UID: \"2b6752cc-d035-4b8d-873d-83ba0d9033ca\") " pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.529600 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqd8z\" (UniqueName: 
\"kubernetes.io/projected/2b6752cc-d035-4b8d-873d-83ba0d9033ca-kube-api-access-cqd8z\") pod \"route-controller-manager-d57478ccc-z94js\" (UID: \"2b6752cc-d035-4b8d-873d-83ba0d9033ca\") " pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.549747 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmvxr\" (UniqueName: \"kubernetes.io/projected/de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09-kube-api-access-bmvxr\") pod \"controller-manager-bc59c6d67-q6nn6\" (UID: \"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09\") " pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.613270 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.614175 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.887923 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bc59c6d67-q6nn6"] Nov 24 21:32:12 crc kubenswrapper[4915]: I1124 21:32:12.963454 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js"] Nov 24 21:32:12 crc kubenswrapper[4915]: W1124 21:32:12.966825 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b6752cc_d035_4b8d_873d_83ba0d9033ca.slice/crio-e2ad4197d800c0366bce174e89f6c8c0077649f17c058fe7c4fc6b7388020579 WatchSource:0}: Error finding container e2ad4197d800c0366bce174e89f6c8c0077649f17c058fe7c4fc6b7388020579: Status 404 returned error can't find the container with id e2ad4197d800c0366bce174e89f6c8c0077649f17c058fe7c4fc6b7388020579 Nov 24 21:32:13 crc kubenswrapper[4915]: I1124 21:32:13.065907 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" event={"ID":"2b6752cc-d035-4b8d-873d-83ba0d9033ca","Type":"ContainerStarted","Data":"e2ad4197d800c0366bce174e89f6c8c0077649f17c058fe7c4fc6b7388020579"} Nov 24 21:32:13 crc kubenswrapper[4915]: I1124 21:32:13.066641 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" event={"ID":"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09","Type":"ContainerStarted","Data":"0cb6d4a0d438e06daca1744de5f64b535b92008b613632e1ac290f075ed44114"} Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.074969 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" 
event={"ID":"2b6752cc-d035-4b8d-873d-83ba0d9033ca","Type":"ContainerStarted","Data":"d1be5f28c6be0b2a68fbe00928a1160b52538d6042dd04bab1b247d46c0d2953"} Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.075266 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.077008 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" event={"ID":"de0a3034-e83f-4e2f-aa58-c8ebb8fcbc09","Type":"ContainerStarted","Data":"c68ea055342611ca7a1afe90cf35dea89b55bc3c9d00aa0482efe6f4ad2e6732"} Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.077299 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.082156 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.082476 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.123591 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-d57478ccc-z94js" podStartSLOduration=4.123566773 podStartE2EDuration="4.123566773s" podCreationTimestamp="2025-11-24 21:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:32:14.107599709 +0000 UTC m=+752.423851882" watchObservedRunningTime="2025-11-24 21:32:14.123566773 +0000 UTC m=+752.439818946" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 
21:32:14.163822 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bc59c6d67-q6nn6" podStartSLOduration=4.163661503 podStartE2EDuration="4.163661503s" podCreationTimestamp="2025-11-24 21:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:32:14.130608784 +0000 UTC m=+752.446860957" watchObservedRunningTime="2025-11-24 21:32:14.163661503 +0000 UTC m=+752.479913676" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.528181 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v"] Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.529810 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.532645 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.549232 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v"] Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.642728 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v\" (UID: \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.642978 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v\" (UID: \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.643129 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94stb\" (UniqueName: \"kubernetes.io/projected/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-kube-api-access-94stb\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v\" (UID: \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.744691 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v\" (UID: \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.744760 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v\" (UID: \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.744808 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94stb\" (UniqueName: 
\"kubernetes.io/projected/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-kube-api-access-94stb\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v\" (UID: \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.745484 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v\" (UID: \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.745689 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v\" (UID: \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.786216 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94stb\" (UniqueName: \"kubernetes.io/projected/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-kube-api-access-94stb\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v\" (UID: \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.848262 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.928141 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq"] Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.929260 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" Nov 24 21:32:14 crc kubenswrapper[4915]: I1124 21:32:14.960069 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq"] Nov 24 21:32:15 crc kubenswrapper[4915]: I1124 21:32:15.049976 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65a8e33a-e2b6-45a0-a989-ea41d3223442-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq\" (UID: \"65a8e33a-e2b6-45a0-a989-ea41d3223442\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" Nov 24 21:32:15 crc kubenswrapper[4915]: I1124 21:32:15.050019 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z879\" (UniqueName: \"kubernetes.io/projected/65a8e33a-e2b6-45a0-a989-ea41d3223442-kube-api-access-5z879\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq\" (UID: \"65a8e33a-e2b6-45a0-a989-ea41d3223442\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" Nov 24 21:32:15 crc kubenswrapper[4915]: I1124 21:32:15.050382 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/65a8e33a-e2b6-45a0-a989-ea41d3223442-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq\" (UID: \"65a8e33a-e2b6-45a0-a989-ea41d3223442\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" Nov 24 21:32:15 crc kubenswrapper[4915]: I1124 21:32:15.152538 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65a8e33a-e2b6-45a0-a989-ea41d3223442-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq\" (UID: \"65a8e33a-e2b6-45a0-a989-ea41d3223442\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" Nov 24 21:32:15 crc kubenswrapper[4915]: I1124 21:32:15.153318 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65a8e33a-e2b6-45a0-a989-ea41d3223442-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq\" (UID: \"65a8e33a-e2b6-45a0-a989-ea41d3223442\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" Nov 24 21:32:15 crc kubenswrapper[4915]: I1124 21:32:15.153422 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z879\" (UniqueName: \"kubernetes.io/projected/65a8e33a-e2b6-45a0-a989-ea41d3223442-kube-api-access-5z879\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq\" (UID: \"65a8e33a-e2b6-45a0-a989-ea41d3223442\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" Nov 24 21:32:15 crc kubenswrapper[4915]: I1124 21:32:15.153686 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65a8e33a-e2b6-45a0-a989-ea41d3223442-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq\" (UID: 
\"65a8e33a-e2b6-45a0-a989-ea41d3223442\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" Nov 24 21:32:15 crc kubenswrapper[4915]: I1124 21:32:15.154194 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65a8e33a-e2b6-45a0-a989-ea41d3223442-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq\" (UID: \"65a8e33a-e2b6-45a0-a989-ea41d3223442\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" Nov 24 21:32:15 crc kubenswrapper[4915]: I1124 21:32:15.175528 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z879\" (UniqueName: \"kubernetes.io/projected/65a8e33a-e2b6-45a0-a989-ea41d3223442-kube-api-access-5z879\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq\" (UID: \"65a8e33a-e2b6-45a0-a989-ea41d3223442\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" Nov 24 21:32:15 crc kubenswrapper[4915]: I1124 21:32:15.259317 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" Nov 24 21:32:15 crc kubenswrapper[4915]: I1124 21:32:15.279196 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v"] Nov 24 21:32:15 crc kubenswrapper[4915]: W1124 21:32:15.284170 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1fb0a11_1a7c_48b6_ae22_75332ce05f0e.slice/crio-559d74421b944c0e30642358ba4c9179cb63d0c16a699a02e14f1bc13fb211f4 WatchSource:0}: Error finding container 559d74421b944c0e30642358ba4c9179cb63d0c16a699a02e14f1bc13fb211f4: Status 404 returned error can't find the container with id 559d74421b944c0e30642358ba4c9179cb63d0c16a699a02e14f1bc13fb211f4 Nov 24 21:32:15 crc kubenswrapper[4915]: I1124 21:32:15.541356 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq"] Nov 24 21:32:16 crc kubenswrapper[4915]: I1124 21:32:16.092037 4915 generic.go:334] "Generic (PLEG): container finished" podID="65a8e33a-e2b6-45a0-a989-ea41d3223442" containerID="4027e47731dd75dbdb87d070004f468ceda39899a81eddcbf86d3bb729f66948" exitCode=0 Nov 24 21:32:16 crc kubenswrapper[4915]: I1124 21:32:16.092106 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" event={"ID":"65a8e33a-e2b6-45a0-a989-ea41d3223442","Type":"ContainerDied","Data":"4027e47731dd75dbdb87d070004f468ceda39899a81eddcbf86d3bb729f66948"} Nov 24 21:32:16 crc kubenswrapper[4915]: I1124 21:32:16.092804 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" 
event={"ID":"65a8e33a-e2b6-45a0-a989-ea41d3223442","Type":"ContainerStarted","Data":"a65c4d49a6dc77858eb0d0d89538dceeebb7fa28b1b6873c022d2781434eb2bc"} Nov 24 21:32:16 crc kubenswrapper[4915]: I1124 21:32:16.095324 4915 generic.go:334] "Generic (PLEG): container finished" podID="e1fb0a11-1a7c-48b6-ae22-75332ce05f0e" containerID="33e49bb8bb06d3d7768e25c217aa74946cee32ab6fd5f02333bc2f052ae7f4c8" exitCode=0 Nov 24 21:32:16 crc kubenswrapper[4915]: I1124 21:32:16.095590 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" event={"ID":"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e","Type":"ContainerDied","Data":"33e49bb8bb06d3d7768e25c217aa74946cee32ab6fd5f02333bc2f052ae7f4c8"} Nov 24 21:32:16 crc kubenswrapper[4915]: I1124 21:32:16.095639 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" event={"ID":"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e","Type":"ContainerStarted","Data":"559d74421b944c0e30642358ba4c9179cb63d0c16a699a02e14f1bc13fb211f4"} Nov 24 21:32:17 crc kubenswrapper[4915]: I1124 21:32:17.012541 4915 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.117561 4915 generic.go:334] "Generic (PLEG): container finished" podID="65a8e33a-e2b6-45a0-a989-ea41d3223442" containerID="4617e360b22c18c1b97002695bdf8b4d185482bd26cb94d5cbd5fb3eec771796" exitCode=0 Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.117707 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" event={"ID":"65a8e33a-e2b6-45a0-a989-ea41d3223442","Type":"ContainerDied","Data":"4617e360b22c18c1b97002695bdf8b4d185482bd26cb94d5cbd5fb3eec771796"} Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.121379 
4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" event={"ID":"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e","Type":"ContainerDied","Data":"be6058225b2108db17416e59d127f6d9297533f881855039ec6395b36d3504f6"} Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.121243 4915 generic.go:334] "Generic (PLEG): container finished" podID="e1fb0a11-1a7c-48b6-ae22-75332ce05f0e" containerID="be6058225b2108db17416e59d127f6d9297533f881855039ec6395b36d3504f6" exitCode=0 Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.288524 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2tlz2"] Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.290107 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.304133 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2tlz2"] Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.411129 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckgf8\" (UniqueName: \"kubernetes.io/projected/53952644-4f63-46d2-bd1c-1d2cbb536fb2-kube-api-access-ckgf8\") pod \"redhat-operators-2tlz2\" (UID: \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\") " pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.411440 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53952644-4f63-46d2-bd1c-1d2cbb536fb2-catalog-content\") pod \"redhat-operators-2tlz2\" (UID: \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\") " pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.411609 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53952644-4f63-46d2-bd1c-1d2cbb536fb2-utilities\") pod \"redhat-operators-2tlz2\" (UID: \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\") " pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.513307 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckgf8\" (UniqueName: \"kubernetes.io/projected/53952644-4f63-46d2-bd1c-1d2cbb536fb2-kube-api-access-ckgf8\") pod \"redhat-operators-2tlz2\" (UID: \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\") " pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.513372 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53952644-4f63-46d2-bd1c-1d2cbb536fb2-catalog-content\") pod \"redhat-operators-2tlz2\" (UID: \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\") " pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.513475 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53952644-4f63-46d2-bd1c-1d2cbb536fb2-utilities\") pod \"redhat-operators-2tlz2\" (UID: \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\") " pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.514193 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53952644-4f63-46d2-bd1c-1d2cbb536fb2-utilities\") pod \"redhat-operators-2tlz2\" (UID: \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\") " pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.514264 4915 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53952644-4f63-46d2-bd1c-1d2cbb536fb2-catalog-content\") pod \"redhat-operators-2tlz2\" (UID: \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\") " pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.541959 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckgf8\" (UniqueName: \"kubernetes.io/projected/53952644-4f63-46d2-bd1c-1d2cbb536fb2-kube-api-access-ckgf8\") pod \"redhat-operators-2tlz2\" (UID: \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\") " pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:18 crc kubenswrapper[4915]: I1124 21:32:18.607861 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:19 crc kubenswrapper[4915]: I1124 21:32:19.066592 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2tlz2"] Nov 24 21:32:19 crc kubenswrapper[4915]: I1124 21:32:19.144350 4915 generic.go:334] "Generic (PLEG): container finished" podID="65a8e33a-e2b6-45a0-a989-ea41d3223442" containerID="603e2cb6296550c5301ba0b918059cb94f08ed19e7a6e23fe6380217359fe638" exitCode=0 Nov 24 21:32:19 crc kubenswrapper[4915]: I1124 21:32:19.144427 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" event={"ID":"65a8e33a-e2b6-45a0-a989-ea41d3223442","Type":"ContainerDied","Data":"603e2cb6296550c5301ba0b918059cb94f08ed19e7a6e23fe6380217359fe638"} Nov 24 21:32:19 crc kubenswrapper[4915]: I1124 21:32:19.146289 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tlz2" event={"ID":"53952644-4f63-46d2-bd1c-1d2cbb536fb2","Type":"ContainerStarted","Data":"96c3880cc6c7df14cbac90273c6e9f4738f57aab4b504c99597d19bd8f35c26e"} Nov 24 21:32:19 crc kubenswrapper[4915]: I1124 
21:32:19.148391 4915 generic.go:334] "Generic (PLEG): container finished" podID="e1fb0a11-1a7c-48b6-ae22-75332ce05f0e" containerID="8689fb46e4e7710e6004b44e9709dc5e9a0cf2b95db995ff5d5f4210dbf1af7c" exitCode=0 Nov 24 21:32:19 crc kubenswrapper[4915]: I1124 21:32:19.148419 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" event={"ID":"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e","Type":"ContainerDied","Data":"8689fb46e4e7710e6004b44e9709dc5e9a0cf2b95db995ff5d5f4210dbf1af7c"} Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.158226 4915 generic.go:334] "Generic (PLEG): container finished" podID="53952644-4f63-46d2-bd1c-1d2cbb536fb2" containerID="2c92becbe0966fadceeb0d9510bf6663a14f6a8fea4dd05f2bfd4ee1365bb4a7" exitCode=0 Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.158914 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tlz2" event={"ID":"53952644-4f63-46d2-bd1c-1d2cbb536fb2","Type":"ContainerDied","Data":"2c92becbe0966fadceeb0d9510bf6663a14f6a8fea4dd05f2bfd4ee1365bb4a7"} Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.508668 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.643570 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-bundle\") pod \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\" (UID: \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\") " Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.644116 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-util\") pod \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\" (UID: \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\") " Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.644155 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94stb\" (UniqueName: \"kubernetes.io/projected/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-kube-api-access-94stb\") pod \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\" (UID: \"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e\") " Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.644544 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-bundle" (OuterVolumeSpecName: "bundle") pod "e1fb0a11-1a7c-48b6-ae22-75332ce05f0e" (UID: "e1fb0a11-1a7c-48b6-ae22-75332ce05f0e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.659336 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-kube-api-access-94stb" (OuterVolumeSpecName: "kube-api-access-94stb") pod "e1fb0a11-1a7c-48b6-ae22-75332ce05f0e" (UID: "e1fb0a11-1a7c-48b6-ae22-75332ce05f0e"). InnerVolumeSpecName "kube-api-access-94stb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.662037 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.746123 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94stb\" (UniqueName: \"kubernetes.io/projected/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-kube-api-access-94stb\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.746186 4915 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.847481 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z879\" (UniqueName: \"kubernetes.io/projected/65a8e33a-e2b6-45a0-a989-ea41d3223442-kube-api-access-5z879\") pod \"65a8e33a-e2b6-45a0-a989-ea41d3223442\" (UID: \"65a8e33a-e2b6-45a0-a989-ea41d3223442\") " Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.847621 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65a8e33a-e2b6-45a0-a989-ea41d3223442-bundle\") pod \"65a8e33a-e2b6-45a0-a989-ea41d3223442\" (UID: \"65a8e33a-e2b6-45a0-a989-ea41d3223442\") " Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.847646 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65a8e33a-e2b6-45a0-a989-ea41d3223442-util\") pod \"65a8e33a-e2b6-45a0-a989-ea41d3223442\" (UID: \"65a8e33a-e2b6-45a0-a989-ea41d3223442\") " Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.848903 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/65a8e33a-e2b6-45a0-a989-ea41d3223442-bundle" (OuterVolumeSpecName: "bundle") pod "65a8e33a-e2b6-45a0-a989-ea41d3223442" (UID: "65a8e33a-e2b6-45a0-a989-ea41d3223442"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.850353 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a8e33a-e2b6-45a0-a989-ea41d3223442-kube-api-access-5z879" (OuterVolumeSpecName: "kube-api-access-5z879") pod "65a8e33a-e2b6-45a0-a989-ea41d3223442" (UID: "65a8e33a-e2b6-45a0-a989-ea41d3223442"). InnerVolumeSpecName "kube-api-access-5z879". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.862547 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a8e33a-e2b6-45a0-a989-ea41d3223442-util" (OuterVolumeSpecName: "util") pod "65a8e33a-e2b6-45a0-a989-ea41d3223442" (UID: "65a8e33a-e2b6-45a0-a989-ea41d3223442"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.894451 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-util" (OuterVolumeSpecName: "util") pod "e1fb0a11-1a7c-48b6-ae22-75332ce05f0e" (UID: "e1fb0a11-1a7c-48b6-ae22-75332ce05f0e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.949215 4915 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1fb0a11-1a7c-48b6-ae22-75332ce05f0e-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.949287 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z879\" (UniqueName: \"kubernetes.io/projected/65a8e33a-e2b6-45a0-a989-ea41d3223442-kube-api-access-5z879\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.949307 4915 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65a8e33a-e2b6-45a0-a989-ea41d3223442-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:20 crc kubenswrapper[4915]: I1124 21:32:20.949325 4915 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65a8e33a-e2b6-45a0-a989-ea41d3223442-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:21 crc kubenswrapper[4915]: I1124 21:32:21.171990 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" event={"ID":"65a8e33a-e2b6-45a0-a989-ea41d3223442","Type":"ContainerDied","Data":"a65c4d49a6dc77858eb0d0d89538dceeebb7fa28b1b6873c022d2781434eb2bc"} Nov 24 21:32:21 crc kubenswrapper[4915]: I1124 21:32:21.172070 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a65c4d49a6dc77858eb0d0d89538dceeebb7fa28b1b6873c022d2781434eb2bc" Nov 24 21:32:21 crc kubenswrapper[4915]: I1124 21:32:21.172074 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq" Nov 24 21:32:21 crc kubenswrapper[4915]: I1124 21:32:21.182097 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" event={"ID":"e1fb0a11-1a7c-48b6-ae22-75332ce05f0e","Type":"ContainerDied","Data":"559d74421b944c0e30642358ba4c9179cb63d0c16a699a02e14f1bc13fb211f4"} Nov 24 21:32:21 crc kubenswrapper[4915]: I1124 21:32:21.182225 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="559d74421b944c0e30642358ba4c9179cb63d0c16a699a02e14f1bc13fb211f4" Nov 24 21:32:21 crc kubenswrapper[4915]: I1124 21:32:21.182412 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v" Nov 24 21:32:21 crc kubenswrapper[4915]: E1124 21:32:21.334958 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65a8e33a_e2b6_45a0_a989_ea41d3223442.slice/crio-a65c4d49a6dc77858eb0d0d89538dceeebb7fa28b1b6873c022d2781434eb2bc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1fb0a11_1a7c_48b6_ae22_75332ce05f0e.slice\": RecentStats: unable to find data in memory cache]" Nov 24 21:32:23 crc kubenswrapper[4915]: I1124 21:32:23.199184 4915 generic.go:334] "Generic (PLEG): container finished" podID="53952644-4f63-46d2-bd1c-1d2cbb536fb2" containerID="e333a9af10a589d3c3154243a03fa8b622d37feea58b662205f81d3ab67361f5" exitCode=0 Nov 24 21:32:23 crc kubenswrapper[4915]: I1124 21:32:23.199233 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tlz2" 
event={"ID":"53952644-4f63-46d2-bd1c-1d2cbb536fb2","Type":"ContainerDied","Data":"e333a9af10a589d3c3154243a03fa8b622d37feea58b662205f81d3ab67361f5"} Nov 24 21:32:24 crc kubenswrapper[4915]: I1124 21:32:24.210985 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tlz2" event={"ID":"53952644-4f63-46d2-bd1c-1d2cbb536fb2","Type":"ContainerStarted","Data":"bc8f3ef01af3582302a106c1365a57161386b382d26640c89b4be785fa0ecee0"} Nov 24 21:32:24 crc kubenswrapper[4915]: I1124 21:32:24.247001 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2tlz2" podStartSLOduration=2.666606213 podStartE2EDuration="6.24696884s" podCreationTimestamp="2025-11-24 21:32:18 +0000 UTC" firstStartedPulling="2025-11-24 21:32:20.162209099 +0000 UTC m=+758.478461282" lastFinishedPulling="2025-11-24 21:32:23.742571736 +0000 UTC m=+762.058823909" observedRunningTime="2025-11-24 21:32:24.239411736 +0000 UTC m=+762.555663909" watchObservedRunningTime="2025-11-24 21:32:24.24696884 +0000 UTC m=+762.563221033" Nov 24 21:32:24 crc kubenswrapper[4915]: I1124 21:32:24.327558 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:32:24 crc kubenswrapper[4915]: I1124 21:32:24.327983 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:32:28 crc kubenswrapper[4915]: I1124 21:32:28.609563 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:28 crc kubenswrapper[4915]: I1124 21:32:28.610123 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.392403 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m"] Nov 24 21:32:29 crc kubenswrapper[4915]: E1124 21:32:29.393259 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fb0a11-1a7c-48b6-ae22-75332ce05f0e" containerName="util" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.393302 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fb0a11-1a7c-48b6-ae22-75332ce05f0e" containerName="util" Nov 24 21:32:29 crc kubenswrapper[4915]: E1124 21:32:29.393318 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a8e33a-e2b6-45a0-a989-ea41d3223442" containerName="extract" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.393327 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a8e33a-e2b6-45a0-a989-ea41d3223442" containerName="extract" Nov 24 21:32:29 crc kubenswrapper[4915]: E1124 21:32:29.393363 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fb0a11-1a7c-48b6-ae22-75332ce05f0e" containerName="extract" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.393374 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fb0a11-1a7c-48b6-ae22-75332ce05f0e" containerName="extract" Nov 24 21:32:29 crc kubenswrapper[4915]: E1124 21:32:29.393386 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a8e33a-e2b6-45a0-a989-ea41d3223442" containerName="pull" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.393394 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a8e33a-e2b6-45a0-a989-ea41d3223442" containerName="pull" Nov 24 21:32:29 crc kubenswrapper[4915]: E1124 
21:32:29.393407 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fb0a11-1a7c-48b6-ae22-75332ce05f0e" containerName="pull" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.393414 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fb0a11-1a7c-48b6-ae22-75332ce05f0e" containerName="pull" Nov 24 21:32:29 crc kubenswrapper[4915]: E1124 21:32:29.393456 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a8e33a-e2b6-45a0-a989-ea41d3223442" containerName="util" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.393467 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a8e33a-e2b6-45a0-a989-ea41d3223442" containerName="util" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.393648 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fb0a11-1a7c-48b6-ae22-75332ce05f0e" containerName="extract" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.393686 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="65a8e33a-e2b6-45a0-a989-ea41d3223442" containerName="extract" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.394868 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.398126 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.398617 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.398866 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.403997 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.405340 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.406217 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-d9khp" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.410467 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m"] Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.462875 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-webhook-cert\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: \"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.462927 
4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztfz5\" (UniqueName: \"kubernetes.io/projected/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-kube-api-access-ztfz5\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: \"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.463000 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-manager-config\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: \"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.463151 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-apiservice-cert\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: \"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.463245 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: \"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.563885 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-webhook-cert\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: \"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.563926 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztfz5\" (UniqueName: \"kubernetes.io/projected/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-kube-api-access-ztfz5\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: \"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.563966 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-manager-config\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: \"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.564025 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-apiservice-cert\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: \"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.564073 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: 
\"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.564866 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-manager-config\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: \"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.570243 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-webhook-cert\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: \"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.573449 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: \"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.580546 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-apiservice-cert\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: \"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.611073 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ztfz5\" (UniqueName: \"kubernetes.io/projected/cf2549dc-2e6e-464a-9d5b-4631dcfe9e74-kube-api-access-ztfz5\") pod \"loki-operator-controller-manager-65b8d94b4b-6kr4m\" (UID: \"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74\") " pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.647871 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2tlz2" podUID="53952644-4f63-46d2-bd1c-1d2cbb536fb2" containerName="registry-server" probeResult="failure" output=< Nov 24 21:32:29 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 21:32:29 crc kubenswrapper[4915]: > Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.717079 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:29 crc kubenswrapper[4915]: I1124 21:32:29.988588 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m"] Nov 24 21:32:29 crc kubenswrapper[4915]: W1124 21:32:29.997145 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf2549dc_2e6e_464a_9d5b_4631dcfe9e74.slice/crio-a0845c41d895e58f01a5b82c4a5d0007ef31b5978b8c5e15a7183341993c1b08 WatchSource:0}: Error finding container a0845c41d895e58f01a5b82c4a5d0007ef31b5978b8c5e15a7183341993c1b08: Status 404 returned error can't find the container with id a0845c41d895e58f01a5b82c4a5d0007ef31b5978b8c5e15a7183341993c1b08 Nov 24 21:32:30 crc kubenswrapper[4915]: I1124 21:32:30.252154 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" 
event={"ID":"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74","Type":"ContainerStarted","Data":"a0845c41d895e58f01a5b82c4a5d0007ef31b5978b8c5e15a7183341993c1b08"} Nov 24 21:32:34 crc kubenswrapper[4915]: I1124 21:32:34.387478 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-9pcwh"] Nov 24 21:32:34 crc kubenswrapper[4915]: I1124 21:32:34.388753 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-9pcwh" Nov 24 21:32:34 crc kubenswrapper[4915]: I1124 21:32:34.392038 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Nov 24 21:32:34 crc kubenswrapper[4915]: I1124 21:32:34.392412 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Nov 24 21:32:34 crc kubenswrapper[4915]: I1124 21:32:34.392628 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-xw8pl" Nov 24 21:32:34 crc kubenswrapper[4915]: I1124 21:32:34.407527 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-9pcwh"] Nov 24 21:32:34 crc kubenswrapper[4915]: I1124 21:32:34.457670 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n482\" (UniqueName: \"kubernetes.io/projected/e75a5a1e-8a27-41b2-8b34-86e0981dae10-kube-api-access-2n482\") pod \"cluster-logging-operator-ff9846bd-9pcwh\" (UID: \"e75a5a1e-8a27-41b2-8b34-86e0981dae10\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-9pcwh" Nov 24 21:32:34 crc kubenswrapper[4915]: I1124 21:32:34.559372 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2n482\" (UniqueName: \"kubernetes.io/projected/e75a5a1e-8a27-41b2-8b34-86e0981dae10-kube-api-access-2n482\") pod 
\"cluster-logging-operator-ff9846bd-9pcwh\" (UID: \"e75a5a1e-8a27-41b2-8b34-86e0981dae10\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-9pcwh" Nov 24 21:32:34 crc kubenswrapper[4915]: I1124 21:32:34.583830 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2n482\" (UniqueName: \"kubernetes.io/projected/e75a5a1e-8a27-41b2-8b34-86e0981dae10-kube-api-access-2n482\") pod \"cluster-logging-operator-ff9846bd-9pcwh\" (UID: \"e75a5a1e-8a27-41b2-8b34-86e0981dae10\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-9pcwh" Nov 24 21:32:34 crc kubenswrapper[4915]: I1124 21:32:34.718678 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-9pcwh" Nov 24 21:32:35 crc kubenswrapper[4915]: I1124 21:32:35.641208 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-9pcwh"] Nov 24 21:32:35 crc kubenswrapper[4915]: W1124 21:32:35.655292 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode75a5a1e_8a27_41b2_8b34_86e0981dae10.slice/crio-8318293810a8b6548599cefae0ea59033e81c549a8e28f8a5ebc0d84e8d58055 WatchSource:0}: Error finding container 8318293810a8b6548599cefae0ea59033e81c549a8e28f8a5ebc0d84e8d58055: Status 404 returned error can't find the container with id 8318293810a8b6548599cefae0ea59033e81c549a8e28f8a5ebc0d84e8d58055 Nov 24 21:32:36 crc kubenswrapper[4915]: I1124 21:32:36.347171 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" event={"ID":"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74","Type":"ContainerStarted","Data":"4bb8fe0823e9f9368fd6987208480129180fe7bffbeac665e2034f2c41585d13"} Nov 24 21:32:36 crc kubenswrapper[4915]: I1124 21:32:36.348290 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/cluster-logging-operator-ff9846bd-9pcwh" event={"ID":"e75a5a1e-8a27-41b2-8b34-86e0981dae10","Type":"ContainerStarted","Data":"8318293810a8b6548599cefae0ea59033e81c549a8e28f8a5ebc0d84e8d58055"} Nov 24 21:32:38 crc kubenswrapper[4915]: I1124 21:32:38.653372 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:38 crc kubenswrapper[4915]: I1124 21:32:38.709207 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:41 crc kubenswrapper[4915]: I1124 21:32:41.876117 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2tlz2"] Nov 24 21:32:41 crc kubenswrapper[4915]: I1124 21:32:41.876725 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2tlz2" podUID="53952644-4f63-46d2-bd1c-1d2cbb536fb2" containerName="registry-server" containerID="cri-o://bc8f3ef01af3582302a106c1365a57161386b382d26640c89b4be785fa0ecee0" gracePeriod=2 Nov 24 21:32:42 crc kubenswrapper[4915]: I1124 21:32:42.392958 4915 generic.go:334] "Generic (PLEG): container finished" podID="53952644-4f63-46d2-bd1c-1d2cbb536fb2" containerID="bc8f3ef01af3582302a106c1365a57161386b382d26640c89b4be785fa0ecee0" exitCode=0 Nov 24 21:32:42 crc kubenswrapper[4915]: I1124 21:32:42.393001 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tlz2" event={"ID":"53952644-4f63-46d2-bd1c-1d2cbb536fb2","Type":"ContainerDied","Data":"bc8f3ef01af3582302a106c1365a57161386b382d26640c89b4be785fa0ecee0"} Nov 24 21:32:45 crc kubenswrapper[4915]: I1124 21:32:45.584453 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:45 crc kubenswrapper[4915]: I1124 21:32:45.730280 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53952644-4f63-46d2-bd1c-1d2cbb536fb2-utilities\") pod \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\" (UID: \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\") " Nov 24 21:32:45 crc kubenswrapper[4915]: I1124 21:32:45.730646 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53952644-4f63-46d2-bd1c-1d2cbb536fb2-catalog-content\") pod \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\" (UID: \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\") " Nov 24 21:32:45 crc kubenswrapper[4915]: I1124 21:32:45.730721 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckgf8\" (UniqueName: \"kubernetes.io/projected/53952644-4f63-46d2-bd1c-1d2cbb536fb2-kube-api-access-ckgf8\") pod \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\" (UID: \"53952644-4f63-46d2-bd1c-1d2cbb536fb2\") " Nov 24 21:32:45 crc kubenswrapper[4915]: I1124 21:32:45.732508 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53952644-4f63-46d2-bd1c-1d2cbb536fb2-utilities" (OuterVolumeSpecName: "utilities") pod "53952644-4f63-46d2-bd1c-1d2cbb536fb2" (UID: "53952644-4f63-46d2-bd1c-1d2cbb536fb2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:45 crc kubenswrapper[4915]: I1124 21:32:45.736395 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53952644-4f63-46d2-bd1c-1d2cbb536fb2-kube-api-access-ckgf8" (OuterVolumeSpecName: "kube-api-access-ckgf8") pod "53952644-4f63-46d2-bd1c-1d2cbb536fb2" (UID: "53952644-4f63-46d2-bd1c-1d2cbb536fb2"). InnerVolumeSpecName "kube-api-access-ckgf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:32:45 crc kubenswrapper[4915]: I1124 21:32:45.832266 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckgf8\" (UniqueName: \"kubernetes.io/projected/53952644-4f63-46d2-bd1c-1d2cbb536fb2-kube-api-access-ckgf8\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:45 crc kubenswrapper[4915]: I1124 21:32:45.832300 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53952644-4f63-46d2-bd1c-1d2cbb536fb2-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:45 crc kubenswrapper[4915]: I1124 21:32:45.845297 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53952644-4f63-46d2-bd1c-1d2cbb536fb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53952644-4f63-46d2-bd1c-1d2cbb536fb2" (UID: "53952644-4f63-46d2-bd1c-1d2cbb536fb2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:32:45 crc kubenswrapper[4915]: I1124 21:32:45.933283 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53952644-4f63-46d2-bd1c-1d2cbb536fb2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:32:46 crc kubenswrapper[4915]: I1124 21:32:46.421953 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tlz2" event={"ID":"53952644-4f63-46d2-bd1c-1d2cbb536fb2","Type":"ContainerDied","Data":"96c3880cc6c7df14cbac90273c6e9f4738f57aab4b504c99597d19bd8f35c26e"} Nov 24 21:32:46 crc kubenswrapper[4915]: I1124 21:32:46.422360 4915 scope.go:117] "RemoveContainer" containerID="bc8f3ef01af3582302a106c1365a57161386b382d26640c89b4be785fa0ecee0" Nov 24 21:32:46 crc kubenswrapper[4915]: I1124 21:32:46.422010 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2tlz2" Nov 24 21:32:46 crc kubenswrapper[4915]: I1124 21:32:46.424910 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" event={"ID":"cf2549dc-2e6e-464a-9d5b-4631dcfe9e74","Type":"ContainerStarted","Data":"d1275a462653f8b975bfc3c64174d4a35104e3a6082f362091c0e61f26124279"} Nov 24 21:32:46 crc kubenswrapper[4915]: I1124 21:32:46.425105 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:46 crc kubenswrapper[4915]: I1124 21:32:46.436087 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-9pcwh" event={"ID":"e75a5a1e-8a27-41b2-8b34-86e0981dae10","Type":"ContainerStarted","Data":"1531a6e2bd65c158df2fbd196466045ce64b288a2277cbbc46553aa6a07814a7"} Nov 24 21:32:46 crc kubenswrapper[4915]: I1124 21:32:46.436162 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" Nov 24 21:32:46 crc kubenswrapper[4915]: I1124 21:32:46.454694 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-65b8d94b4b-6kr4m" podStartSLOduration=1.8154394649999999 podStartE2EDuration="17.454675141s" podCreationTimestamp="2025-11-24 21:32:29 +0000 UTC" firstStartedPulling="2025-11-24 21:32:30.000705861 +0000 UTC m=+768.316958034" lastFinishedPulling="2025-11-24 21:32:45.639941537 +0000 UTC m=+783.956193710" observedRunningTime="2025-11-24 21:32:46.452604385 +0000 UTC m=+784.768856558" watchObservedRunningTime="2025-11-24 21:32:46.454675141 +0000 UTC m=+784.770927314" Nov 24 21:32:46 crc kubenswrapper[4915]: I1124 21:32:46.461406 4915 scope.go:117] "RemoveContainer" 
containerID="e333a9af10a589d3c3154243a03fa8b622d37feea58b662205f81d3ab67361f5" Nov 24 21:32:46 crc kubenswrapper[4915]: I1124 21:32:46.476729 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-9pcwh" podStartSLOduration=2.6048751279999998 podStartE2EDuration="12.476709477s" podCreationTimestamp="2025-11-24 21:32:34 +0000 UTC" firstStartedPulling="2025-11-24 21:32:35.658593695 +0000 UTC m=+773.974845868" lastFinishedPulling="2025-11-24 21:32:45.530428044 +0000 UTC m=+783.846680217" observedRunningTime="2025-11-24 21:32:46.472631547 +0000 UTC m=+784.788883720" watchObservedRunningTime="2025-11-24 21:32:46.476709477 +0000 UTC m=+784.792961650" Nov 24 21:32:46 crc kubenswrapper[4915]: I1124 21:32:46.497430 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2tlz2"] Nov 24 21:32:46 crc kubenswrapper[4915]: I1124 21:32:46.504078 4915 scope.go:117] "RemoveContainer" containerID="2c92becbe0966fadceeb0d9510bf6663a14f6a8fea4dd05f2bfd4ee1365bb4a7" Nov 24 21:32:46 crc kubenswrapper[4915]: I1124 21:32:46.513304 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2tlz2"] Nov 24 21:32:48 crc kubenswrapper[4915]: I1124 21:32:48.434675 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53952644-4f63-46d2-bd1c-1d2cbb536fb2" path="/var/lib/kubelet/pods/53952644-4f63-46d2-bd1c-1d2cbb536fb2/volumes" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.302460 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Nov 24 21:32:50 crc kubenswrapper[4915]: E1124 21:32:50.303054 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53952644-4f63-46d2-bd1c-1d2cbb536fb2" containerName="extract-utilities" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.303067 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="53952644-4f63-46d2-bd1c-1d2cbb536fb2" containerName="extract-utilities" Nov 24 21:32:50 crc kubenswrapper[4915]: E1124 21:32:50.303080 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53952644-4f63-46d2-bd1c-1d2cbb536fb2" containerName="extract-content" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.303086 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="53952644-4f63-46d2-bd1c-1d2cbb536fb2" containerName="extract-content" Nov 24 21:32:50 crc kubenswrapper[4915]: E1124 21:32:50.303102 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53952644-4f63-46d2-bd1c-1d2cbb536fb2" containerName="registry-server" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.303109 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="53952644-4f63-46d2-bd1c-1d2cbb536fb2" containerName="registry-server" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.303226 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="53952644-4f63-46d2-bd1c-1d2cbb536fb2" containerName="registry-server" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.303641 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.305997 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.306099 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.317933 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.493390 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f23de451-a1d3-46e8-823a-e85d6972947c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f23de451-a1d3-46e8-823a-e85d6972947c\") pod \"minio\" (UID: \"7a9c4ae4-edf1-4af8-8a09-a29ea6b31b96\") " pod="minio-dev/minio" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.493508 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7snf2\" (UniqueName: \"kubernetes.io/projected/7a9c4ae4-edf1-4af8-8a09-a29ea6b31b96-kube-api-access-7snf2\") pod \"minio\" (UID: \"7a9c4ae4-edf1-4af8-8a09-a29ea6b31b96\") " pod="minio-dev/minio" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.595121 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f23de451-a1d3-46e8-823a-e85d6972947c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f23de451-a1d3-46e8-823a-e85d6972947c\") pod \"minio\" (UID: \"7a9c4ae4-edf1-4af8-8a09-a29ea6b31b96\") " pod="minio-dev/minio" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.595244 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7snf2\" (UniqueName: \"kubernetes.io/projected/7a9c4ae4-edf1-4af8-8a09-a29ea6b31b96-kube-api-access-7snf2\") pod \"minio\" (UID: 
\"7a9c4ae4-edf1-4af8-8a09-a29ea6b31b96\") " pod="minio-dev/minio" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.597481 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.597517 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f23de451-a1d3-46e8-823a-e85d6972947c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f23de451-a1d3-46e8-823a-e85d6972947c\") pod \"minio\" (UID: \"7a9c4ae4-edf1-4af8-8a09-a29ea6b31b96\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/399125db1bdf24c99d3961bc8cde5ea53b5a3b8545042d8228b1c7f7e4a327bf/globalmount\"" pod="minio-dev/minio" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.618261 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7snf2\" (UniqueName: \"kubernetes.io/projected/7a9c4ae4-edf1-4af8-8a09-a29ea6b31b96-kube-api-access-7snf2\") pod \"minio\" (UID: \"7a9c4ae4-edf1-4af8-8a09-a29ea6b31b96\") " pod="minio-dev/minio" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.631705 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f23de451-a1d3-46e8-823a-e85d6972947c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f23de451-a1d3-46e8-823a-e85d6972947c\") pod \"minio\" (UID: \"7a9c4ae4-edf1-4af8-8a09-a29ea6b31b96\") " pod="minio-dev/minio" Nov 24 21:32:50 crc kubenswrapper[4915]: I1124 21:32:50.924007 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Nov 24 21:32:51 crc kubenswrapper[4915]: I1124 21:32:51.395794 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 24 21:32:51 crc kubenswrapper[4915]: I1124 21:32:51.461263 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"7a9c4ae4-edf1-4af8-8a09-a29ea6b31b96","Type":"ContainerStarted","Data":"a86226cd6bd5bd30a7007be464aa53f7ea6e73e1631da5059effba4df5320437"} Nov 24 21:32:54 crc kubenswrapper[4915]: I1124 21:32:54.326888 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:32:54 crc kubenswrapper[4915]: I1124 21:32:54.327327 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:32:54 crc kubenswrapper[4915]: I1124 21:32:54.327371 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:32:54 crc kubenswrapper[4915]: I1124 21:32:54.327982 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"615393d21d21ae1d445108dc4b58018415536dc2737ece31d186b4e6013b73e9"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:32:54 crc kubenswrapper[4915]: I1124 21:32:54.328030 4915 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://615393d21d21ae1d445108dc4b58018415536dc2737ece31d186b4e6013b73e9" gracePeriod=600 Nov 24 21:32:54 crc kubenswrapper[4915]: I1124 21:32:54.482809 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="615393d21d21ae1d445108dc4b58018415536dc2737ece31d186b4e6013b73e9" exitCode=0 Nov 24 21:32:54 crc kubenswrapper[4915]: I1124 21:32:54.482872 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"615393d21d21ae1d445108dc4b58018415536dc2737ece31d186b4e6013b73e9"} Nov 24 21:32:54 crc kubenswrapper[4915]: I1124 21:32:54.482926 4915 scope.go:117] "RemoveContainer" containerID="57ea3717b75122ffe3155ca58b5c8ae3efe0bf8c9a3d96219f40084a7d067ef2" Nov 24 21:32:55 crc kubenswrapper[4915]: I1124 21:32:55.495413 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"9b807af914a662edc8043def20fa4b712cbac16789b5da03da771b483217896d"} Nov 24 21:32:55 crc kubenswrapper[4915]: I1124 21:32:55.497456 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"7a9c4ae4-edf1-4af8-8a09-a29ea6b31b96","Type":"ContainerStarted","Data":"873897574fb9362f5803f11fa42abd66a3d9cb2aa6105481f3b5f972dd206492"} Nov 24 21:32:55 crc kubenswrapper[4915]: I1124 21:32:55.529540 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=3.6378874850000003 podStartE2EDuration="7.529508151s" podCreationTimestamp="2025-11-24 21:32:48 +0000 UTC" firstStartedPulling="2025-11-24 21:32:51.411231872 
+0000 UTC m=+789.727484045" lastFinishedPulling="2025-11-24 21:32:55.302852528 +0000 UTC m=+793.619104711" observedRunningTime="2025-11-24 21:32:55.527751294 +0000 UTC m=+793.844003467" watchObservedRunningTime="2025-11-24 21:32:55.529508151 +0000 UTC m=+793.845760324" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.078415 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5"] Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.080747 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.085361 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.085681 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-tc56w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.086071 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.093947 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.093768 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5"] Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.095508 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.236109 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d8add4aa-1636-46a6-9bb9-050f2c4a456f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: \"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.236210 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d8add4aa-1636-46a6-9bb9-050f2c4a456f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: \"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.236247 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mq59\" (UniqueName: \"kubernetes.io/projected/d8add4aa-1636-46a6-9bb9-050f2c4a456f-kube-api-access-6mq59\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: \"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.236296 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d8add4aa-1636-46a6-9bb9-050f2c4a456f-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: \"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.236454 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8add4aa-1636-46a6-9bb9-050f2c4a456f-config\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: 
\"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.293688 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-r4hdw"] Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.294538 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.298602 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.298741 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.298883 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.312654 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-r4hdw"] Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.338266 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8add4aa-1636-46a6-9bb9-050f2c4a456f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: \"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.338315 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mq59\" (UniqueName: \"kubernetes.io/projected/d8add4aa-1636-46a6-9bb9-050f2c4a456f-kube-api-access-6mq59\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: 
\"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.338367 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d8add4aa-1636-46a6-9bb9-050f2c4a456f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: \"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.338393 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d8add4aa-1636-46a6-9bb9-050f2c4a456f-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: \"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.338450 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8add4aa-1636-46a6-9bb9-050f2c4a456f-config\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: \"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.339848 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8add4aa-1636-46a6-9bb9-050f2c4a456f-config\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: \"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.340340 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d8add4aa-1636-46a6-9bb9-050f2c4a456f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: \"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.350377 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/d8add4aa-1636-46a6-9bb9-050f2c4a456f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: \"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.350818 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/d8add4aa-1636-46a6-9bb9-050f2c4a456f-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: \"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.374456 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mq59\" (UniqueName: \"kubernetes.io/projected/d8add4aa-1636-46a6-9bb9-050f2c4a456f-kube-api-access-6mq59\") pod \"logging-loki-distributor-76cc67bf56-5qxx5\" (UID: \"d8add4aa-1636-46a6-9bb9-050f2c4a456f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.375134 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6"] Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.376041 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.385252 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.385462 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.401224 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6"] Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.405662 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.441641 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.442023 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfcwh\" (UniqueName: \"kubernetes.io/projected/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-kube-api-access-mfcwh\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.442086 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: 
\"kubernetes.io/secret/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.442137 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-config\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.442154 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.442173 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.473377 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w"] Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.474341 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.490876 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.493663 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.493872 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.494053 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.494167 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.498481 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt"] Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.499564 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.502721 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-445zz" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.505018 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w"] Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.511747 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt"] Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.549963 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd510cd-fbfb-4351-88fa-149952989968-config\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550022 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/7cd510cd-fbfb-4351-88fa-149952989968-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550051 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " 
pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550084 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550102 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-config\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550116 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550485 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550530 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" 
(UniqueName: \"kubernetes.io/secret/12be2daf-d31e-4bbb-921f-15a90d8db057-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550568 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550588 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97f4q\" (UniqueName: \"kubernetes.io/projected/12be2daf-d31e-4bbb-921f-15a90d8db057-kube-api-access-97f4q\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550636 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-rbac\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550677 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/7cd510cd-fbfb-4351-88fa-149952989968-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " 
pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550710 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfcwh\" (UniqueName: \"kubernetes.io/projected/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-kube-api-access-mfcwh\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550753 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cd510cd-fbfb-4351-88fa-149952989968-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550772 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/12be2daf-d31e-4bbb-921f-15a90d8db057-tls-secret\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550830 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-lokistack-gateway\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550875 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" 
(UniqueName: \"kubernetes.io/secret/12be2daf-d31e-4bbb-921f-15a90d8db057-tenants\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550897 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz97h\" (UniqueName: \"kubernetes.io/projected/7cd510cd-fbfb-4351-88fa-149952989968-kube-api-access-bz97h\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.550937 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.551295 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-config\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.552055 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 
21:33:00.556557 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.557453 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.562354 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.580243 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfcwh\" (UniqueName: \"kubernetes.io/projected/1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f-kube-api-access-mfcwh\") pod \"logging-loki-querier-5895d59bb8-r4hdw\" (UID: \"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.609194 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653355 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/7cd510cd-fbfb-4351-88fa-149952989968-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653409 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cd510cd-fbfb-4351-88fa-149952989968-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653431 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/12be2daf-d31e-4bbb-921f-15a90d8db057-tls-secret\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653458 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-lokistack-gateway\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653479 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" 
(UniqueName: \"kubernetes.io/configmap/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-rbac\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653508 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653536 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/12be2daf-d31e-4bbb-921f-15a90d8db057-tenants\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653557 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz97h\" (UniqueName: \"kubernetes.io/projected/7cd510cd-fbfb-4351-88fa-149952989968-kube-api-access-bz97h\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653575 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbszc\" (UniqueName: \"kubernetes.io/projected/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-kube-api-access-rbszc\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" 
Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653593 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-lokistack-gateway\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653629 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd510cd-fbfb-4351-88fa-149952989968-config\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653646 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/7cd510cd-fbfb-4351-88fa-149952989968-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653662 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653686 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653704 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653727 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-tenants\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653747 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653764 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/12be2daf-d31e-4bbb-921f-15a90d8db057-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " 
pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653803 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97f4q\" (UniqueName: \"kubernetes.io/projected/12be2daf-d31e-4bbb-921f-15a90d8db057-kube-api-access-97f4q\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653826 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-tls-secret\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.653843 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-rbac\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.654436 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-lokistack-gateway\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.654711 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-rbac\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" 
(UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.655124 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cd510cd-fbfb-4351-88fa-149952989968-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: E1124 21:33:00.655210 4915 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Nov 24 21:33:00 crc kubenswrapper[4915]: E1124 21:33:00.655253 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12be2daf-d31e-4bbb-921f-15a90d8db057-tls-secret podName:12be2daf-d31e-4bbb-921f-15a90d8db057 nodeName:}" failed. No retries permitted until 2025-11-24 21:33:01.155239769 +0000 UTC m=+799.471491932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/12be2daf-d31e-4bbb-921f-15a90d8db057-tls-secret") pod "logging-loki-gateway-85f6fc88b5-bnc8w" (UID: "12be2daf-d31e-4bbb-921f-15a90d8db057") : secret "logging-loki-gateway-http" not found Nov 24 21:33:00 crc kubenswrapper[4915]: E1124 21:33:00.655489 4915 configmap.go:193] Couldn't get configMap openshift-logging/logging-loki-gateway-ca-bundle: configmap "logging-loki-gateway-ca-bundle" not found Nov 24 21:33:00 crc kubenswrapper[4915]: E1124 21:33:00.655518 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-logging-loki-gateway-ca-bundle podName:12be2daf-d31e-4bbb-921f-15a90d8db057 nodeName:}" failed. 
No retries permitted until 2025-11-24 21:33:01.155511466 +0000 UTC m=+799.471763639 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "logging-loki-gateway-ca-bundle" (UniqueName: "kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-logging-loki-gateway-ca-bundle") pod "logging-loki-gateway-85f6fc88b5-bnc8w" (UID: "12be2daf-d31e-4bbb-921f-15a90d8db057") : configmap "logging-loki-gateway-ca-bundle" not found Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.656243 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.656811 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7cd510cd-fbfb-4351-88fa-149952989968-config\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.659671 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/7cd510cd-fbfb-4351-88fa-149952989968-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.661357 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: 
\"kubernetes.io/secret/7cd510cd-fbfb-4351-88fa-149952989968-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.661496 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/12be2daf-d31e-4bbb-921f-15a90d8db057-tenants\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.673603 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/12be2daf-d31e-4bbb-921f-15a90d8db057-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.678219 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97f4q\" (UniqueName: \"kubernetes.io/projected/12be2daf-d31e-4bbb-921f-15a90d8db057-kube-api-access-97f4q\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.678258 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bz97h\" (UniqueName: \"kubernetes.io/projected/7cd510cd-fbfb-4351-88fa-149952989968-kube-api-access-bz97h\") pod \"logging-loki-query-frontend-84558f7c9f-4knt6\" (UID: \"7cd510cd-fbfb-4351-88fa-149952989968\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: 
I1124 21:33:00.755456 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-tls-secret\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.755816 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-rbac\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.755836 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.755882 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbszc\" (UniqueName: \"kubernetes.io/projected/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-kube-api-access-rbszc\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.755911 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-lokistack-gateway\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " 
pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.755959 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.755985 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.756011 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-tenants\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: E1124 21:33:00.755669 4915 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Nov 24 21:33:00 crc kubenswrapper[4915]: E1124 21:33:00.757010 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-tls-secret podName:cd4feec8-19ce-4e0a-8b72-62a5630a13cd nodeName:}" failed. No retries permitted until 2025-11-24 21:33:01.256990082 +0000 UTC m=+799.573242325 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-tls-secret") pod "logging-loki-gateway-85f6fc88b5-2ltkt" (UID: "cd4feec8-19ce-4e0a-8b72-62a5630a13cd") : secret "logging-loki-gateway-http" not found Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.758456 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-lokistack-gateway\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.758520 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-rbac\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.760110 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.761491 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 
21:33:00.762212 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-tenants\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.763482 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.774278 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbszc\" (UniqueName: \"kubernetes.io/projected/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-kube-api-access-rbszc\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.803578 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:00 crc kubenswrapper[4915]: I1124 21:33:00.938752 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5"] Nov 24 21:33:00 crc kubenswrapper[4915]: W1124 21:33:00.966320 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8add4aa_1636_46a6_9bb9_050f2c4a456f.slice/crio-9d4754bb248d7dabc38cf1d56e80e337a4e5351d887c6a2eaf6d11078562b096 WatchSource:0}: Error finding container 9d4754bb248d7dabc38cf1d56e80e337a4e5351d887c6a2eaf6d11078562b096: Status 404 returned error can't find the container with id 9d4754bb248d7dabc38cf1d56e80e337a4e5351d887c6a2eaf6d11078562b096 Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.136976 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-r4hdw"] Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.180322 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6"] Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.184598 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.184699 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/12be2daf-d31e-4bbb-921f-15a90d8db057-tls-secret\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " 
pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.185742 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12be2daf-d31e-4bbb-921f-15a90d8db057-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:01 crc kubenswrapper[4915]: W1124 21:33:01.189617 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cd510cd_fbfb_4351_88fa_149952989968.slice/crio-14dbed71e9b94c01712968addb8ed8a5debf2866d6b544de972d9cfb81239b2c WatchSource:0}: Error finding container 14dbed71e9b94c01712968addb8ed8a5debf2866d6b544de972d9cfb81239b2c: Status 404 returned error can't find the container with id 14dbed71e9b94c01712968addb8ed8a5debf2866d6b544de972d9cfb81239b2c Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.192492 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/12be2daf-d31e-4bbb-921f-15a90d8db057-tls-secret\") pod \"logging-loki-gateway-85f6fc88b5-bnc8w\" (UID: \"12be2daf-d31e-4bbb-921f-15a90d8db057\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.279614 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.280697 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.283580 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.283766 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.288397 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.289352 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-tls-secret\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.292650 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/cd4feec8-19ce-4e0a-8b72-62a5630a13cd-tls-secret\") pod \"logging-loki-gateway-85f6fc88b5-2ltkt\" (UID: \"cd4feec8-19ce-4e0a-8b72-62a5630a13cd\") " pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.324337 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.325359 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.327046 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.327879 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.334017 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.390594 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3159fb3d-ea27-4141-97d1-0924f1854801-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.390654 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8bbdd28a-cf94-4079-af89-a3895134420f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8bbdd28a-cf94-4079-af89-a3895134420f\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.390678 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/3159fb3d-ea27-4141-97d1-0924f1854801-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.390696 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3159fb3d-ea27-4141-97d1-0924f1854801-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.390718 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3159fb3d-ea27-4141-97d1-0924f1854801-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.390753 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3159fb3d-ea27-4141-97d1-0924f1854801-config\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.390912 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7626e716-abfd-46ef-8277-aa5daf6ed915\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7626e716-abfd-46ef-8277-aa5daf6ed915\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.390954 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dr47\" (UniqueName: \"kubernetes.io/projected/3159fb3d-ea27-4141-97d1-0924f1854801-kube-api-access-8dr47\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 
21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.416256 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.421989 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.423137 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.425152 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.425321 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.427701 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.431454 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.492001 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d7e6e018-97d5-4be8-8cfc-5b75a1143a3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7e6e018-97d5-4be8-8cfc-5b75a1143a3b\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.492079 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e687b84-4a5d-4a03-b384-35db23ba77cb-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.492108 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e687b84-4a5d-4a03-b384-35db23ba77cb-config\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.492132 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvzq5\" (UniqueName: \"kubernetes.io/projected/8e687b84-4a5d-4a03-b384-35db23ba77cb-kube-api-access-vvzq5\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 
21:33:01.492156 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3159fb3d-ea27-4141-97d1-0924f1854801-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.492217 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8bbdd28a-cf94-4079-af89-a3895134420f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8bbdd28a-cf94-4079-af89-a3895134420f\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.492238 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8e687b84-4a5d-4a03-b384-35db23ba77cb-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.492261 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/3159fb3d-ea27-4141-97d1-0924f1854801-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.492281 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3159fb3d-ea27-4141-97d1-0924f1854801-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " 
pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.492297 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8e687b84-4a5d-4a03-b384-35db23ba77cb-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.492319 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3159fb3d-ea27-4141-97d1-0924f1854801-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.492379 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3159fb3d-ea27-4141-97d1-0924f1854801-config\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.492397 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8e687b84-4a5d-4a03-b384-35db23ba77cb-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.492418 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7626e716-abfd-46ef-8277-aa5daf6ed915\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7626e716-abfd-46ef-8277-aa5daf6ed915\") pod \"logging-loki-ingester-0\" (UID: 
\"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.492435 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dr47\" (UniqueName: \"kubernetes.io/projected/3159fb3d-ea27-4141-97d1-0924f1854801-kube-api-access-8dr47\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.493586 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3159fb3d-ea27-4141-97d1-0924f1854801-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.493687 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3159fb3d-ea27-4141-97d1-0924f1854801-config\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.496071 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/3159fb3d-ea27-4141-97d1-0924f1854801-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.496142 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/3159fb3d-ea27-4141-97d1-0924f1854801-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " 
pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.501765 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.501791 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.501823 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7626e716-abfd-46ef-8277-aa5daf6ed915\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7626e716-abfd-46ef-8277-aa5daf6ed915\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9ecd2c0691f7fa32d40cc400a029af4d376a6351e1fff5cebaef492063d70fc6/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.501826 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8bbdd28a-cf94-4079-af89-a3895134420f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8bbdd28a-cf94-4079-af89-a3895134420f\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/64918122ad9398561b9b3b517a2023477f573ee6cf8292142aebeb50144b8ee4/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.505936 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/3159fb3d-ea27-4141-97d1-0924f1854801-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " 
pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.520338 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dr47\" (UniqueName: \"kubernetes.io/projected/3159fb3d-ea27-4141-97d1-0924f1854801-kube-api-access-8dr47\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.564219 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7626e716-abfd-46ef-8277-aa5daf6ed915\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7626e716-abfd-46ef-8277-aa5daf6ed915\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.566182 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8bbdd28a-cf94-4079-af89-a3895134420f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8bbdd28a-cf94-4079-af89-a3895134420f\") pod \"logging-loki-ingester-0\" (UID: \"3159fb3d-ea27-4141-97d1-0924f1854801\") " pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.569060 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" event={"ID":"7cd510cd-fbfb-4351-88fa-149952989968","Type":"ContainerStarted","Data":"14dbed71e9b94c01712968addb8ed8a5debf2866d6b544de972d9cfb81239b2c"} Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.570162 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" event={"ID":"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f","Type":"ContainerStarted","Data":"3df6412716b2a73b62d682d6984f675c6f5dfaa814c237504aefcbc785ecf91e"} Nov 24 21:33:01 crc kubenswrapper[4915]: 
I1124 21:33:01.570918 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" event={"ID":"d8add4aa-1636-46a6-9bb9-050f2c4a456f","Type":"ContainerStarted","Data":"9d4754bb248d7dabc38cf1d56e80e337a4e5351d887c6a2eaf6d11078562b096"} Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.593841 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8e687b84-4a5d-4a03-b384-35db23ba77cb-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.593957 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8e687b84-4a5d-4a03-b384-35db23ba77cb-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.594016 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-25199898-cc26-469b-b525-f82c5399a0a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25199898-cc26-469b-b525-f82c5399a0a5\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.594058 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/a4ba5fbb-400a-47eb-9bad-fee76800d021-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" 
Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.594106 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e687b84-4a5d-4a03-b384-35db23ba77cb-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.594137 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvzq5\" (UniqueName: \"kubernetes.io/projected/8e687b84-4a5d-4a03-b384-35db23ba77cb-kube-api-access-vvzq5\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.594165 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ba5fbb-400a-47eb-9bad-fee76800d021-config\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.594199 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a4ba5fbb-400a-47eb-9bad-fee76800d021-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.594229 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjwb4\" (UniqueName: \"kubernetes.io/projected/a4ba5fbb-400a-47eb-9bad-fee76800d021-kube-api-access-xjwb4\") pod \"logging-loki-index-gateway-0\" (UID: 
\"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.594268 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d7e6e018-97d5-4be8-8cfc-5b75a1143a3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7e6e018-97d5-4be8-8cfc-5b75a1143a3b\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.594298 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4ba5fbb-400a-47eb-9bad-fee76800d021-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.594316 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a4ba5fbb-400a-47eb-9bad-fee76800d021-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.594341 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e687b84-4a5d-4a03-b384-35db23ba77cb-config\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.594371 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: 
\"kubernetes.io/secret/8e687b84-4a5d-4a03-b384-35db23ba77cb-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.595431 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e687b84-4a5d-4a03-b384-35db23ba77cb-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.595530 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e687b84-4a5d-4a03-b384-35db23ba77cb-config\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.599036 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.599069 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d7e6e018-97d5-4be8-8cfc-5b75a1143a3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7e6e018-97d5-4be8-8cfc-5b75a1143a3b\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4ebb8f2e1ebf18f84f582ae8d242d5fd6267695bf251373c623509ee9c6e2752/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.599131 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/8e687b84-4a5d-4a03-b384-35db23ba77cb-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.600904 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/8e687b84-4a5d-4a03-b384-35db23ba77cb-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.601075 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/8e687b84-4a5d-4a03-b384-35db23ba77cb-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.601162 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.612320 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvzq5\" (UniqueName: \"kubernetes.io/projected/8e687b84-4a5d-4a03-b384-35db23ba77cb-kube-api-access-vvzq5\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.628330 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d7e6e018-97d5-4be8-8cfc-5b75a1143a3b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d7e6e018-97d5-4be8-8cfc-5b75a1143a3b\") pod \"logging-loki-compactor-0\" (UID: \"8e687b84-4a5d-4a03-b384-35db23ba77cb\") " pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.649007 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.695452 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a4ba5fbb-400a-47eb-9bad-fee76800d021-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.695509 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjwb4\" (UniqueName: \"kubernetes.io/projected/a4ba5fbb-400a-47eb-9bad-fee76800d021-kube-api-access-xjwb4\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.695561 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4ba5fbb-400a-47eb-9bad-fee76800d021-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.695579 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/a4ba5fbb-400a-47eb-9bad-fee76800d021-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.695630 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-25199898-cc26-469b-b525-f82c5399a0a5\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25199898-cc26-469b-b525-f82c5399a0a5\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.695659 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/a4ba5fbb-400a-47eb-9bad-fee76800d021-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.695688 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ba5fbb-400a-47eb-9bad-fee76800d021-config\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.697202 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4ba5fbb-400a-47eb-9bad-fee76800d021-config\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.698171 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4ba5fbb-400a-47eb-9bad-fee76800d021-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.701702 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/a4ba5fbb-400a-47eb-9bad-fee76800d021-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.703350 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/a4ba5fbb-400a-47eb-9bad-fee76800d021-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.715451 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.715536 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-25199898-cc26-469b-b525-f82c5399a0a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25199898-cc26-469b-b525-f82c5399a0a5\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3201f56c5a676cfb14646001f4e0dd4bc8873f21faa3800fc00757426f0a5df2/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.718981 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjwb4\" (UniqueName: \"kubernetes.io/projected/a4ba5fbb-400a-47eb-9bad-fee76800d021-kube-api-access-xjwb4\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.719153 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/a4ba5fbb-400a-47eb-9bad-fee76800d021-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.750824 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-25199898-cc26-469b-b525-f82c5399a0a5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-25199898-cc26-469b-b525-f82c5399a0a5\") pod \"logging-loki-index-gateway-0\" (UID: \"a4ba5fbb-400a-47eb-9bad-fee76800d021\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.793537 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.865818 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w"] Nov 24 21:33:01 crc kubenswrapper[4915]: I1124 21:33:01.927941 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt"] Nov 24 21:33:01 crc kubenswrapper[4915]: W1124 21:33:01.934935 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd4feec8_19ce_4e0a_8b72_62a5630a13cd.slice/crio-ac3b91c76d9f6e215285d24a63a39dc0792809f1f14e9a9455cdce38b5585017 WatchSource:0}: Error finding container ac3b91c76d9f6e215285d24a63a39dc0792809f1f14e9a9455cdce38b5585017: Status 404 returned error can't find the container with id ac3b91c76d9f6e215285d24a63a39dc0792809f1f14e9a9455cdce38b5585017 Nov 24 21:33:02 crc kubenswrapper[4915]: I1124 21:33:02.075508 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 24 21:33:02 crc 
kubenswrapper[4915]: I1124 21:33:02.080077 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 24 21:33:02 crc kubenswrapper[4915]: W1124 21:33:02.081760 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3159fb3d_ea27_4141_97d1_0924f1854801.slice/crio-df689e4d18d29f91a95607f4bbb75aab402df2f4a317384b2587ae5a236bf1ed WatchSource:0}: Error finding container df689e4d18d29f91a95607f4bbb75aab402df2f4a317384b2587ae5a236bf1ed: Status 404 returned error can't find the container with id df689e4d18d29f91a95607f4bbb75aab402df2f4a317384b2587ae5a236bf1ed Nov 24 21:33:02 crc kubenswrapper[4915]: W1124 21:33:02.086022 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e687b84_4a5d_4a03_b384_35db23ba77cb.slice/crio-ccff5b74140a0badaae4faba41e86cac230bf1f1c1ad5eec9e6f694cf9222c36 WatchSource:0}: Error finding container ccff5b74140a0badaae4faba41e86cac230bf1f1c1ad5eec9e6f694cf9222c36: Status 404 returned error can't find the container with id ccff5b74140a0badaae4faba41e86cac230bf1f1c1ad5eec9e6f694cf9222c36 Nov 24 21:33:02 crc kubenswrapper[4915]: I1124 21:33:02.212152 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 24 21:33:02 crc kubenswrapper[4915]: W1124 21:33:02.220347 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4ba5fbb_400a_47eb_9bad_fee76800d021.slice/crio-6adaf035cd6e0a79ec3f76acef15f0481e567cd357d24b532033ef39ee2afb8d WatchSource:0}: Error finding container 6adaf035cd6e0a79ec3f76acef15f0481e567cd357d24b532033ef39ee2afb8d: Status 404 returned error can't find the container with id 6adaf035cd6e0a79ec3f76acef15f0481e567cd357d24b532033ef39ee2afb8d Nov 24 21:33:02 crc kubenswrapper[4915]: I1124 
21:33:02.592003 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"8e687b84-4a5d-4a03-b384-35db23ba77cb","Type":"ContainerStarted","Data":"ccff5b74140a0badaae4faba41e86cac230bf1f1c1ad5eec9e6f694cf9222c36"} Nov 24 21:33:02 crc kubenswrapper[4915]: I1124 21:33:02.592984 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" event={"ID":"12be2daf-d31e-4bbb-921f-15a90d8db057","Type":"ContainerStarted","Data":"1f14bd0d5ceb306fa71facc9a08d0ef8495b43bb4b9152a08ac18f9f8151f8e5"} Nov 24 21:33:02 crc kubenswrapper[4915]: I1124 21:33:02.594520 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"3159fb3d-ea27-4141-97d1-0924f1854801","Type":"ContainerStarted","Data":"df689e4d18d29f91a95607f4bbb75aab402df2f4a317384b2587ae5a236bf1ed"} Nov 24 21:33:02 crc kubenswrapper[4915]: I1124 21:33:02.595234 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" event={"ID":"cd4feec8-19ce-4e0a-8b72-62a5630a13cd","Type":"ContainerStarted","Data":"ac3b91c76d9f6e215285d24a63a39dc0792809f1f14e9a9455cdce38b5585017"} Nov 24 21:33:02 crc kubenswrapper[4915]: I1124 21:33:02.597951 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"a4ba5fbb-400a-47eb-9bad-fee76800d021","Type":"ContainerStarted","Data":"6adaf035cd6e0a79ec3f76acef15f0481e567cd357d24b532033ef39ee2afb8d"} Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.650188 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"3159fb3d-ea27-4141-97d1-0924f1854801","Type":"ContainerStarted","Data":"22989ce73c882b8d828edf4c7fbc745678edde56925ae363db9e7b68d6699e14"} Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.651002 4915 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.651762 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" event={"ID":"cd4feec8-19ce-4e0a-8b72-62a5630a13cd","Type":"ContainerStarted","Data":"bc977dcec3d0bedad2da773fb8f47e4ae84366373126ac219422e3d309f8cbb6"} Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.655458 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"a4ba5fbb-400a-47eb-9bad-fee76800d021","Type":"ContainerStarted","Data":"a30bbd539363dc035a1760994bfd478c83713f54d020325a38944edf18ac268f"} Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.655592 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.657689 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" event={"ID":"7cd510cd-fbfb-4351-88fa-149952989968","Type":"ContainerStarted","Data":"dd4cef0e4a9a57bf7c065ded3a245dcb66844f878aafb7a615f081aa734b2b07"} Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.657818 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.659515 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"8e687b84-4a5d-4a03-b384-35db23ba77cb","Type":"ContainerStarted","Data":"936a22e8619b1fdbc16146f7be4c1004efda6f7ec4c8cb66fbaaef7aaa549b2e"} Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.659671 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:08 crc kubenswrapper[4915]: 
I1124 21:33:08.661285 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" event={"ID":"12be2daf-d31e-4bbb-921f-15a90d8db057","Type":"ContainerStarted","Data":"afa75b45944e074db34a9cad1f05cabde1541fc832b3e94fd8efae604af02175"} Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.663486 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" event={"ID":"1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f","Type":"ContainerStarted","Data":"2ebbd4aed95c3a1e54c0272eb76c93d41d3215a7b6d0109b50b4cf91328b1789"} Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.663623 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.665127 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" event={"ID":"d8add4aa-1636-46a6-9bb9-050f2c4a456f","Type":"ContainerStarted","Data":"fa534835517769a6eae368ce0536b89fe8e978239750ef548d9e46e499439f57"} Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.665325 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.682250 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.20211797 podStartE2EDuration="8.682216977s" podCreationTimestamp="2025-11-24 21:33:00 +0000 UTC" firstStartedPulling="2025-11-24 21:33:02.085024035 +0000 UTC m=+800.401276208" lastFinishedPulling="2025-11-24 21:33:07.565123032 +0000 UTC m=+805.881375215" observedRunningTime="2025-11-24 21:33:08.676261706 +0000 UTC m=+806.992513909" watchObservedRunningTime="2025-11-24 21:33:08.682216977 +0000 UTC m=+806.998469230" Nov 24 21:33:08 crc 
kubenswrapper[4915]: I1124 21:33:08.706155 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" podStartSLOduration=2.311012839 podStartE2EDuration="8.706133714s" podCreationTimestamp="2025-11-24 21:33:00 +0000 UTC" firstStartedPulling="2025-11-24 21:33:01.149513652 +0000 UTC m=+799.465765825" lastFinishedPulling="2025-11-24 21:33:07.544634517 +0000 UTC m=+805.860886700" observedRunningTime="2025-11-24 21:33:08.699057232 +0000 UTC m=+807.015309405" watchObservedRunningTime="2025-11-24 21:33:08.706133714 +0000 UTC m=+807.022385887" Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.722139 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.230350225 podStartE2EDuration="8.722114337s" podCreationTimestamp="2025-11-24 21:33:00 +0000 UTC" firstStartedPulling="2025-11-24 21:33:02.222538416 +0000 UTC m=+800.538790609" lastFinishedPulling="2025-11-24 21:33:07.714302538 +0000 UTC m=+806.030554721" observedRunningTime="2025-11-24 21:33:08.716843714 +0000 UTC m=+807.033095877" watchObservedRunningTime="2025-11-24 21:33:08.722114337 +0000 UTC m=+807.038366550" Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.745764 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" podStartSLOduration=2.313697622 podStartE2EDuration="8.745749726s" podCreationTimestamp="2025-11-24 21:33:00 +0000 UTC" firstStartedPulling="2025-11-24 21:33:01.192235789 +0000 UTC m=+799.508487962" lastFinishedPulling="2025-11-24 21:33:07.624287883 +0000 UTC m=+805.940540066" observedRunningTime="2025-11-24 21:33:08.741740838 +0000 UTC m=+807.057993011" watchObservedRunningTime="2025-11-24 21:33:08.745749726 +0000 UTC m=+807.062001899" Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.774036 4915 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.007442014 podStartE2EDuration="8.774011451s" podCreationTimestamp="2025-11-24 21:33:00 +0000 UTC" firstStartedPulling="2025-11-24 21:33:02.088648904 +0000 UTC m=+800.404901077" lastFinishedPulling="2025-11-24 21:33:07.855218341 +0000 UTC m=+806.171470514" observedRunningTime="2025-11-24 21:33:08.76587558 +0000 UTC m=+807.082127783" watchObservedRunningTime="2025-11-24 21:33:08.774011451 +0000 UTC m=+807.090263634" Nov 24 21:33:08 crc kubenswrapper[4915]: I1124 21:33:08.801953 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" podStartSLOduration=2.010822196 podStartE2EDuration="8.801930596s" podCreationTimestamp="2025-11-24 21:33:00 +0000 UTC" firstStartedPulling="2025-11-24 21:33:00.970078247 +0000 UTC m=+799.286330420" lastFinishedPulling="2025-11-24 21:33:07.761186627 +0000 UTC m=+806.077438820" observedRunningTime="2025-11-24 21:33:08.798956146 +0000 UTC m=+807.115208339" watchObservedRunningTime="2025-11-24 21:33:08.801930596 +0000 UTC m=+807.118182789" Nov 24 21:33:11 crc kubenswrapper[4915]: I1124 21:33:11.689965 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" event={"ID":"12be2daf-d31e-4bbb-921f-15a90d8db057","Type":"ContainerStarted","Data":"3fd700007bba925c9c23e6c41af50ccd9e620ce62cf3565dc1c81375947094e3"} Nov 24 21:33:11 crc kubenswrapper[4915]: I1124 21:33:11.692679 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:11 crc kubenswrapper[4915]: I1124 21:33:11.692707 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:11 crc kubenswrapper[4915]: I1124 21:33:11.695712 4915 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" event={"ID":"cd4feec8-19ce-4e0a-8b72-62a5630a13cd","Type":"ContainerStarted","Data":"842569b7302903658a140cc2cd11967a0000673dcdb83dce70e824e4f561e844"} Nov 24 21:33:11 crc kubenswrapper[4915]: I1124 21:33:11.696222 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:11 crc kubenswrapper[4915]: I1124 21:33:11.696287 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:11 crc kubenswrapper[4915]: I1124 21:33:11.702644 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:11 crc kubenswrapper[4915]: I1124 21:33:11.704011 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:11 crc kubenswrapper[4915]: I1124 21:33:11.705015 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" Nov 24 21:33:11 crc kubenswrapper[4915]: I1124 21:33:11.707251 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" Nov 24 21:33:11 crc kubenswrapper[4915]: I1124 21:33:11.716022 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-bnc8w" podStartSLOduration=2.940671276 podStartE2EDuration="11.716008553s" podCreationTimestamp="2025-11-24 21:33:00 +0000 UTC" firstStartedPulling="2025-11-24 21:33:01.900002249 +0000 UTC m=+800.216254422" lastFinishedPulling="2025-11-24 21:33:10.675339456 +0000 UTC m=+808.991591699" observedRunningTime="2025-11-24 21:33:11.712293323 +0000 UTC m=+810.028545506" watchObservedRunningTime="2025-11-24 
21:33:11.716008553 +0000 UTC m=+810.032260726" Nov 24 21:33:11 crc kubenswrapper[4915]: I1124 21:33:11.733227 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-85f6fc88b5-2ltkt" podStartSLOduration=3.004406371 podStartE2EDuration="11.733205238s" podCreationTimestamp="2025-11-24 21:33:00 +0000 UTC" firstStartedPulling="2025-11-24 21:33:01.938196253 +0000 UTC m=+800.254448426" lastFinishedPulling="2025-11-24 21:33:10.66699511 +0000 UTC m=+808.983247293" observedRunningTime="2025-11-24 21:33:11.729282722 +0000 UTC m=+810.045534895" watchObservedRunningTime="2025-11-24 21:33:11.733205238 +0000 UTC m=+810.049457421" Nov 24 21:33:17 crc kubenswrapper[4915]: I1124 21:33:17.425512 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5cq6n"] Nov 24 21:33:17 crc kubenswrapper[4915]: I1124 21:33:17.427666 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:17 crc kubenswrapper[4915]: I1124 21:33:17.458765 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5cq6n"] Nov 24 21:33:17 crc kubenswrapper[4915]: I1124 21:33:17.484571 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-catalog-content\") pod \"certified-operators-5cq6n\" (UID: \"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\") " pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:17 crc kubenswrapper[4915]: I1124 21:33:17.484614 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6jpj\" (UniqueName: \"kubernetes.io/projected/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-kube-api-access-m6jpj\") pod \"certified-operators-5cq6n\" (UID: 
\"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\") " pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:17 crc kubenswrapper[4915]: I1124 21:33:17.484729 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-utilities\") pod \"certified-operators-5cq6n\" (UID: \"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\") " pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:17 crc kubenswrapper[4915]: I1124 21:33:17.586405 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-utilities\") pod \"certified-operators-5cq6n\" (UID: \"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\") " pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:17 crc kubenswrapper[4915]: I1124 21:33:17.586511 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-catalog-content\") pod \"certified-operators-5cq6n\" (UID: \"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\") " pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:17 crc kubenswrapper[4915]: I1124 21:33:17.586550 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6jpj\" (UniqueName: \"kubernetes.io/projected/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-kube-api-access-m6jpj\") pod \"certified-operators-5cq6n\" (UID: \"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\") " pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:17 crc kubenswrapper[4915]: I1124 21:33:17.586982 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-utilities\") pod \"certified-operators-5cq6n\" (UID: 
\"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\") " pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:17 crc kubenswrapper[4915]: I1124 21:33:17.587072 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-catalog-content\") pod \"certified-operators-5cq6n\" (UID: \"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\") " pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:17 crc kubenswrapper[4915]: I1124 21:33:17.610400 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6jpj\" (UniqueName: \"kubernetes.io/projected/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-kube-api-access-m6jpj\") pod \"certified-operators-5cq6n\" (UID: \"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\") " pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:17 crc kubenswrapper[4915]: I1124 21:33:17.762230 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:18 crc kubenswrapper[4915]: I1124 21:33:18.201858 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5cq6n"] Nov 24 21:33:18 crc kubenswrapper[4915]: I1124 21:33:18.742709 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cq6n" event={"ID":"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e","Type":"ContainerStarted","Data":"4e4dd8bf5b6ee252d5c36a0293b89a1fbe3934ce7629965e2937e19def656d6e"} Nov 24 21:33:19 crc kubenswrapper[4915]: I1124 21:33:19.749185 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cq6n" event={"ID":"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e","Type":"ContainerStarted","Data":"7ec97f063cb47da37514d36eee697e4b15a9a3b254071ff1cb441f8e230ccda5"} Nov 24 21:33:20 crc kubenswrapper[4915]: I1124 21:33:20.758147 4915 generic.go:334] "Generic (PLEG): container finished" podID="e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" containerID="7ec97f063cb47da37514d36eee697e4b15a9a3b254071ff1cb441f8e230ccda5" exitCode=0 Nov 24 21:33:20 crc kubenswrapper[4915]: I1124 21:33:20.758180 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cq6n" event={"ID":"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e","Type":"ContainerDied","Data":"7ec97f063cb47da37514d36eee697e4b15a9a3b254071ff1cb441f8e230ccda5"} Nov 24 21:33:21 crc kubenswrapper[4915]: I1124 21:33:21.768516 4915 generic.go:334] "Generic (PLEG): container finished" podID="e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" containerID="a56a768729b534a598740ce6685b7a7a6d80eb13454c398af69654d2f5203115" exitCode=0 Nov 24 21:33:21 crc kubenswrapper[4915]: I1124 21:33:21.768941 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cq6n" 
event={"ID":"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e","Type":"ContainerDied","Data":"a56a768729b534a598740ce6685b7a7a6d80eb13454c398af69654d2f5203115"} Nov 24 21:33:22 crc kubenswrapper[4915]: I1124 21:33:22.776840 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cq6n" event={"ID":"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e","Type":"ContainerStarted","Data":"9301232e97c2fca102b8a5abd2a6c9ca6da2bd839158b1531e753918909534e7"} Nov 24 21:33:22 crc kubenswrapper[4915]: I1124 21:33:22.795745 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5cq6n" podStartSLOduration=4.344908684 podStartE2EDuration="5.795718889s" podCreationTimestamp="2025-11-24 21:33:17 +0000 UTC" firstStartedPulling="2025-11-24 21:33:20.760595895 +0000 UTC m=+819.076848068" lastFinishedPulling="2025-11-24 21:33:22.2114061 +0000 UTC m=+820.527658273" observedRunningTime="2025-11-24 21:33:22.791570777 +0000 UTC m=+821.107822960" watchObservedRunningTime="2025-11-24 21:33:22.795718889 +0000 UTC m=+821.111971062" Nov 24 21:33:27 crc kubenswrapper[4915]: I1124 21:33:27.762495 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:27 crc kubenswrapper[4915]: I1124 21:33:27.766278 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:27 crc kubenswrapper[4915]: I1124 21:33:27.819039 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:27 crc kubenswrapper[4915]: I1124 21:33:27.903111 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:28 crc kubenswrapper[4915]: I1124 21:33:28.082142 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-5cq6n"] Nov 24 21:33:29 crc kubenswrapper[4915]: I1124 21:33:29.862647 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5cq6n" podUID="e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" containerName="registry-server" containerID="cri-o://9301232e97c2fca102b8a5abd2a6c9ca6da2bd839158b1531e753918909534e7" gracePeriod=2 Nov 24 21:33:30 crc kubenswrapper[4915]: I1124 21:33:30.418594 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-5qxx5" Nov 24 21:33:30 crc kubenswrapper[4915]: I1124 21:33:30.616617 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-r4hdw" Nov 24 21:33:30 crc kubenswrapper[4915]: I1124 21:33:30.867393 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-4knt6" Nov 24 21:33:30 crc kubenswrapper[4915]: I1124 21:33:30.869529 4915 generic.go:334] "Generic (PLEG): container finished" podID="e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" containerID="9301232e97c2fca102b8a5abd2a6c9ca6da2bd839158b1531e753918909534e7" exitCode=0 Nov 24 21:33:30 crc kubenswrapper[4915]: I1124 21:33:30.869566 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cq6n" event={"ID":"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e","Type":"ContainerDied","Data":"9301232e97c2fca102b8a5abd2a6c9ca6da2bd839158b1531e753918909534e7"} Nov 24 21:33:30 crc kubenswrapper[4915]: I1124 21:33:30.869590 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cq6n" event={"ID":"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e","Type":"ContainerDied","Data":"4e4dd8bf5b6ee252d5c36a0293b89a1fbe3934ce7629965e2937e19def656d6e"} Nov 24 21:33:30 crc kubenswrapper[4915]: I1124 21:33:30.869601 
4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e4dd8bf5b6ee252d5c36a0293b89a1fbe3934ce7629965e2937e19def656d6e" Nov 24 21:33:30 crc kubenswrapper[4915]: I1124 21:33:30.926019 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.057237 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6jpj\" (UniqueName: \"kubernetes.io/projected/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-kube-api-access-m6jpj\") pod \"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\" (UID: \"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\") " Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.057338 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-catalog-content\") pod \"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\" (UID: \"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\") " Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.057380 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-utilities\") pod \"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\" (UID: \"e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e\") " Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.058232 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-utilities" (OuterVolumeSpecName: "utilities") pod "e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" (UID: "e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.058506 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.063417 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-kube-api-access-m6jpj" (OuterVolumeSpecName: "kube-api-access-m6jpj") pod "e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" (UID: "e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e"). InnerVolumeSpecName "kube-api-access-m6jpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.112692 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" (UID: "e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.159730 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.159762 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6jpj\" (UniqueName: \"kubernetes.io/projected/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e-kube-api-access-m6jpj\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.609041 4915 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.609597 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="3159fb3d-ea27-4141-97d1-0924f1854801" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.658085 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.804030 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.808084 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dqkr4"] Nov 24 21:33:31 crc kubenswrapper[4915]: E1124 21:33:31.808475 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" containerName="registry-server" Nov 
24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.808500 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" containerName="registry-server" Nov 24 21:33:31 crc kubenswrapper[4915]: E1124 21:33:31.808522 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" containerName="extract-utilities" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.808531 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" containerName="extract-utilities" Nov 24 21:33:31 crc kubenswrapper[4915]: E1124 21:33:31.808554 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" containerName="extract-content" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.808563 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" containerName="extract-content" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.808717 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" containerName="registry-server" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.810385 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.831713 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dqkr4"] Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.869989 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e80a6352-2025-4de0-9408-9a7684429e5c-catalog-content\") pod \"community-operators-dqkr4\" (UID: \"e80a6352-2025-4de0-9408-9a7684429e5c\") " pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.870677 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdf2s\" (UniqueName: \"kubernetes.io/projected/e80a6352-2025-4de0-9408-9a7684429e5c-kube-api-access-qdf2s\") pod \"community-operators-dqkr4\" (UID: \"e80a6352-2025-4de0-9408-9a7684429e5c\") " pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.870802 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e80a6352-2025-4de0-9408-9a7684429e5c-utilities\") pod \"community-operators-dqkr4\" (UID: \"e80a6352-2025-4de0-9408-9a7684429e5c\") " pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.882114 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5cq6n" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.915606 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5cq6n"] Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.923731 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5cq6n"] Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.972115 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e80a6352-2025-4de0-9408-9a7684429e5c-catalog-content\") pod \"community-operators-dqkr4\" (UID: \"e80a6352-2025-4de0-9408-9a7684429e5c\") " pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.972400 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdf2s\" (UniqueName: \"kubernetes.io/projected/e80a6352-2025-4de0-9408-9a7684429e5c-kube-api-access-qdf2s\") pod \"community-operators-dqkr4\" (UID: \"e80a6352-2025-4de0-9408-9a7684429e5c\") " pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.972592 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e80a6352-2025-4de0-9408-9a7684429e5c-utilities\") pod \"community-operators-dqkr4\" (UID: \"e80a6352-2025-4de0-9408-9a7684429e5c\") " pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.972649 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e80a6352-2025-4de0-9408-9a7684429e5c-catalog-content\") pod \"community-operators-dqkr4\" (UID: \"e80a6352-2025-4de0-9408-9a7684429e5c\") " 
pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.973210 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e80a6352-2025-4de0-9408-9a7684429e5c-utilities\") pod \"community-operators-dqkr4\" (UID: \"e80a6352-2025-4de0-9408-9a7684429e5c\") " pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:31 crc kubenswrapper[4915]: I1124 21:33:31.992523 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdf2s\" (UniqueName: \"kubernetes.io/projected/e80a6352-2025-4de0-9408-9a7684429e5c-kube-api-access-qdf2s\") pod \"community-operators-dqkr4\" (UID: \"e80a6352-2025-4de0-9408-9a7684429e5c\") " pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:32 crc kubenswrapper[4915]: I1124 21:33:32.132107 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:32 crc kubenswrapper[4915]: I1124 21:33:32.436722 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e" path="/var/lib/kubelet/pods/e43f5e8d-bb32-4b9e-9d2d-856ea08e8a1e/volumes" Nov 24 21:33:32 crc kubenswrapper[4915]: I1124 21:33:32.602040 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dqkr4"] Nov 24 21:33:32 crc kubenswrapper[4915]: W1124 21:33:32.608049 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode80a6352_2025_4de0_9408_9a7684429e5c.slice/crio-6930ee4a598b60c1267f32eec421f9a570906b3a8957b83bac661fd41f63e05b WatchSource:0}: Error finding container 6930ee4a598b60c1267f32eec421f9a570906b3a8957b83bac661fd41f63e05b: Status 404 returned error can't find the container with id 6930ee4a598b60c1267f32eec421f9a570906b3a8957b83bac661fd41f63e05b Nov 24 
21:33:32 crc kubenswrapper[4915]: I1124 21:33:32.900965 4915 generic.go:334] "Generic (PLEG): container finished" podID="e80a6352-2025-4de0-9408-9a7684429e5c" containerID="865a276c77dabb586e441e3ba83ee4a165a612e0fe58e0a2ed871dbe9d63258b" exitCode=0 Nov 24 21:33:32 crc kubenswrapper[4915]: I1124 21:33:32.901093 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqkr4" event={"ID":"e80a6352-2025-4de0-9408-9a7684429e5c","Type":"ContainerDied","Data":"865a276c77dabb586e441e3ba83ee4a165a612e0fe58e0a2ed871dbe9d63258b"} Nov 24 21:33:32 crc kubenswrapper[4915]: I1124 21:33:32.901208 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqkr4" event={"ID":"e80a6352-2025-4de0-9408-9a7684429e5c","Type":"ContainerStarted","Data":"6930ee4a598b60c1267f32eec421f9a570906b3a8957b83bac661fd41f63e05b"} Nov 24 21:33:33 crc kubenswrapper[4915]: I1124 21:33:33.910317 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqkr4" event={"ID":"e80a6352-2025-4de0-9408-9a7684429e5c","Type":"ContainerStarted","Data":"ad3fff0966bb81f3a493252167b0043fae131238397ce1680ff26830dee4b292"} Nov 24 21:33:34 crc kubenswrapper[4915]: I1124 21:33:34.919136 4915 generic.go:334] "Generic (PLEG): container finished" podID="e80a6352-2025-4de0-9408-9a7684429e5c" containerID="ad3fff0966bb81f3a493252167b0043fae131238397ce1680ff26830dee4b292" exitCode=0 Nov 24 21:33:34 crc kubenswrapper[4915]: I1124 21:33:34.919223 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqkr4" event={"ID":"e80a6352-2025-4de0-9408-9a7684429e5c","Type":"ContainerDied","Data":"ad3fff0966bb81f3a493252167b0043fae131238397ce1680ff26830dee4b292"} Nov 24 21:33:35 crc kubenswrapper[4915]: I1124 21:33:35.926355 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqkr4" 
event={"ID":"e80a6352-2025-4de0-9408-9a7684429e5c","Type":"ContainerStarted","Data":"b0f9b33557f1c29b852aa8cd7b771d7c8152ac13daafdd41418014d553f51e9a"} Nov 24 21:33:35 crc kubenswrapper[4915]: I1124 21:33:35.941851 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dqkr4" podStartSLOduration=2.56107552 podStartE2EDuration="4.941832147s" podCreationTimestamp="2025-11-24 21:33:31 +0000 UTC" firstStartedPulling="2025-11-24 21:33:32.907214379 +0000 UTC m=+831.223466552" lastFinishedPulling="2025-11-24 21:33:35.287970976 +0000 UTC m=+833.604223179" observedRunningTime="2025-11-24 21:33:35.939856983 +0000 UTC m=+834.256109186" watchObservedRunningTime="2025-11-24 21:33:35.941832147 +0000 UTC m=+834.258084340" Nov 24 21:33:41 crc kubenswrapper[4915]: I1124 21:33:41.609848 4915 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 24 21:33:41 crc kubenswrapper[4915]: I1124 21:33:41.610400 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="3159fb3d-ea27-4141-97d1-0924f1854801" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 21:33:42 crc kubenswrapper[4915]: I1124 21:33:42.132276 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:42 crc kubenswrapper[4915]: I1124 21:33:42.133003 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:42 crc kubenswrapper[4915]: I1124 21:33:42.181722 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dqkr4" Nov 
24 21:33:43 crc kubenswrapper[4915]: I1124 21:33:43.024956 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:43 crc kubenswrapper[4915]: I1124 21:33:43.078248 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dqkr4"] Nov 24 21:33:45 crc kubenswrapper[4915]: I1124 21:33:45.011010 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dqkr4" podUID="e80a6352-2025-4de0-9408-9a7684429e5c" containerName="registry-server" containerID="cri-o://b0f9b33557f1c29b852aa8cd7b771d7c8152ac13daafdd41418014d553f51e9a" gracePeriod=2 Nov 24 21:33:45 crc kubenswrapper[4915]: I1124 21:33:45.486914 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:45 crc kubenswrapper[4915]: I1124 21:33:45.655181 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e80a6352-2025-4de0-9408-9a7684429e5c-utilities\") pod \"e80a6352-2025-4de0-9408-9a7684429e5c\" (UID: \"e80a6352-2025-4de0-9408-9a7684429e5c\") " Nov 24 21:33:45 crc kubenswrapper[4915]: I1124 21:33:45.655656 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e80a6352-2025-4de0-9408-9a7684429e5c-catalog-content\") pod \"e80a6352-2025-4de0-9408-9a7684429e5c\" (UID: \"e80a6352-2025-4de0-9408-9a7684429e5c\") " Nov 24 21:33:45 crc kubenswrapper[4915]: I1124 21:33:45.655690 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdf2s\" (UniqueName: \"kubernetes.io/projected/e80a6352-2025-4de0-9408-9a7684429e5c-kube-api-access-qdf2s\") pod \"e80a6352-2025-4de0-9408-9a7684429e5c\" (UID: \"e80a6352-2025-4de0-9408-9a7684429e5c\") " 
Nov 24 21:33:45 crc kubenswrapper[4915]: I1124 21:33:45.656280 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e80a6352-2025-4de0-9408-9a7684429e5c-utilities" (OuterVolumeSpecName: "utilities") pod "e80a6352-2025-4de0-9408-9a7684429e5c" (UID: "e80a6352-2025-4de0-9408-9a7684429e5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:33:45 crc kubenswrapper[4915]: I1124 21:33:45.661412 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e80a6352-2025-4de0-9408-9a7684429e5c-kube-api-access-qdf2s" (OuterVolumeSpecName: "kube-api-access-qdf2s") pod "e80a6352-2025-4de0-9408-9a7684429e5c" (UID: "e80a6352-2025-4de0-9408-9a7684429e5c"). InnerVolumeSpecName "kube-api-access-qdf2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:33:45 crc kubenswrapper[4915]: I1124 21:33:45.704493 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e80a6352-2025-4de0-9408-9a7684429e5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e80a6352-2025-4de0-9408-9a7684429e5c" (UID: "e80a6352-2025-4de0-9408-9a7684429e5c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:33:45 crc kubenswrapper[4915]: I1124 21:33:45.757379 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e80a6352-2025-4de0-9408-9a7684429e5c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:45 crc kubenswrapper[4915]: I1124 21:33:45.757419 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdf2s\" (UniqueName: \"kubernetes.io/projected/e80a6352-2025-4de0-9408-9a7684429e5c-kube-api-access-qdf2s\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:45 crc kubenswrapper[4915]: I1124 21:33:45.757431 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e80a6352-2025-4de0-9408-9a7684429e5c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 21:33:46.022296 4915 generic.go:334] "Generic (PLEG): container finished" podID="e80a6352-2025-4de0-9408-9a7684429e5c" containerID="b0f9b33557f1c29b852aa8cd7b771d7c8152ac13daafdd41418014d553f51e9a" exitCode=0 Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 21:33:46.022354 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqkr4" event={"ID":"e80a6352-2025-4de0-9408-9a7684429e5c","Type":"ContainerDied","Data":"b0f9b33557f1c29b852aa8cd7b771d7c8152ac13daafdd41418014d553f51e9a"} Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 21:33:46.022452 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqkr4" event={"ID":"e80a6352-2025-4de0-9408-9a7684429e5c","Type":"ContainerDied","Data":"6930ee4a598b60c1267f32eec421f9a570906b3a8957b83bac661fd41f63e05b"} Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 21:33:46.022480 4915 scope.go:117] "RemoveContainer" containerID="b0f9b33557f1c29b852aa8cd7b771d7c8152ac13daafdd41418014d553f51e9a" Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 
21:33:46.022456 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqkr4" Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 21:33:46.050860 4915 scope.go:117] "RemoveContainer" containerID="ad3fff0966bb81f3a493252167b0043fae131238397ce1680ff26830dee4b292" Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 21:33:46.079922 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dqkr4"] Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 21:33:46.088449 4915 scope.go:117] "RemoveContainer" containerID="865a276c77dabb586e441e3ba83ee4a165a612e0fe58e0a2ed871dbe9d63258b" Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 21:33:46.092383 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dqkr4"] Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 21:33:46.105099 4915 scope.go:117] "RemoveContainer" containerID="b0f9b33557f1c29b852aa8cd7b771d7c8152ac13daafdd41418014d553f51e9a" Nov 24 21:33:46 crc kubenswrapper[4915]: E1124 21:33:46.105482 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0f9b33557f1c29b852aa8cd7b771d7c8152ac13daafdd41418014d553f51e9a\": container with ID starting with b0f9b33557f1c29b852aa8cd7b771d7c8152ac13daafdd41418014d553f51e9a not found: ID does not exist" containerID="b0f9b33557f1c29b852aa8cd7b771d7c8152ac13daafdd41418014d553f51e9a" Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 21:33:46.105519 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f9b33557f1c29b852aa8cd7b771d7c8152ac13daafdd41418014d553f51e9a"} err="failed to get container status \"b0f9b33557f1c29b852aa8cd7b771d7c8152ac13daafdd41418014d553f51e9a\": rpc error: code = NotFound desc = could not find container \"b0f9b33557f1c29b852aa8cd7b771d7c8152ac13daafdd41418014d553f51e9a\": container with ID starting with 
b0f9b33557f1c29b852aa8cd7b771d7c8152ac13daafdd41418014d553f51e9a not found: ID does not exist" Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 21:33:46.105541 4915 scope.go:117] "RemoveContainer" containerID="ad3fff0966bb81f3a493252167b0043fae131238397ce1680ff26830dee4b292" Nov 24 21:33:46 crc kubenswrapper[4915]: E1124 21:33:46.105940 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad3fff0966bb81f3a493252167b0043fae131238397ce1680ff26830dee4b292\": container with ID starting with ad3fff0966bb81f3a493252167b0043fae131238397ce1680ff26830dee4b292 not found: ID does not exist" containerID="ad3fff0966bb81f3a493252167b0043fae131238397ce1680ff26830dee4b292" Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 21:33:46.105966 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad3fff0966bb81f3a493252167b0043fae131238397ce1680ff26830dee4b292"} err="failed to get container status \"ad3fff0966bb81f3a493252167b0043fae131238397ce1680ff26830dee4b292\": rpc error: code = NotFound desc = could not find container \"ad3fff0966bb81f3a493252167b0043fae131238397ce1680ff26830dee4b292\": container with ID starting with ad3fff0966bb81f3a493252167b0043fae131238397ce1680ff26830dee4b292 not found: ID does not exist" Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 21:33:46.105984 4915 scope.go:117] "RemoveContainer" containerID="865a276c77dabb586e441e3ba83ee4a165a612e0fe58e0a2ed871dbe9d63258b" Nov 24 21:33:46 crc kubenswrapper[4915]: E1124 21:33:46.106399 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"865a276c77dabb586e441e3ba83ee4a165a612e0fe58e0a2ed871dbe9d63258b\": container with ID starting with 865a276c77dabb586e441e3ba83ee4a165a612e0fe58e0a2ed871dbe9d63258b not found: ID does not exist" containerID="865a276c77dabb586e441e3ba83ee4a165a612e0fe58e0a2ed871dbe9d63258b" Nov 24 21:33:46 crc 
kubenswrapper[4915]: I1124 21:33:46.106457 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"865a276c77dabb586e441e3ba83ee4a165a612e0fe58e0a2ed871dbe9d63258b"} err="failed to get container status \"865a276c77dabb586e441e3ba83ee4a165a612e0fe58e0a2ed871dbe9d63258b\": rpc error: code = NotFound desc = could not find container \"865a276c77dabb586e441e3ba83ee4a165a612e0fe58e0a2ed871dbe9d63258b\": container with ID starting with 865a276c77dabb586e441e3ba83ee4a165a612e0fe58e0a2ed871dbe9d63258b not found: ID does not exist" Nov 24 21:33:46 crc kubenswrapper[4915]: I1124 21:33:46.437605 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e80a6352-2025-4de0-9408-9a7684429e5c" path="/var/lib/kubelet/pods/e80a6352-2025-4de0-9408-9a7684429e5c/volumes" Nov 24 21:33:51 crc kubenswrapper[4915]: I1124 21:33:51.606162 4915 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 24 21:33:51 crc kubenswrapper[4915]: I1124 21:33:51.606886 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="3159fb3d-ea27-4141-97d1-0924f1854801" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 21:34:01 crc kubenswrapper[4915]: I1124 21:34:01.619553 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.638332 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8ml6f"] Nov 24 21:34:17 crc kubenswrapper[4915]: E1124 21:34:17.639911 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80a6352-2025-4de0-9408-9a7684429e5c" 
containerName="registry-server" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.639994 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80a6352-2025-4de0-9408-9a7684429e5c" containerName="registry-server" Nov 24 21:34:17 crc kubenswrapper[4915]: E1124 21:34:17.640097 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80a6352-2025-4de0-9408-9a7684429e5c" containerName="extract-content" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.640116 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80a6352-2025-4de0-9408-9a7684429e5c" containerName="extract-content" Nov 24 21:34:17 crc kubenswrapper[4915]: E1124 21:34:17.640203 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80a6352-2025-4de0-9408-9a7684429e5c" containerName="extract-utilities" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.640270 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80a6352-2025-4de0-9408-9a7684429e5c" containerName="extract-utilities" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.640828 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80a6352-2025-4de0-9408-9a7684429e5c" containerName="registry-server" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.645720 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.661617 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ml6f"] Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.719587 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b67dx\" (UniqueName: \"kubernetes.io/projected/d05debec-784e-49ee-be98-09657f2c9ceb-kube-api-access-b67dx\") pod \"redhat-marketplace-8ml6f\" (UID: \"d05debec-784e-49ee-be98-09657f2c9ceb\") " pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.719643 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d05debec-784e-49ee-be98-09657f2c9ceb-utilities\") pod \"redhat-marketplace-8ml6f\" (UID: \"d05debec-784e-49ee-be98-09657f2c9ceb\") " pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.719679 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d05debec-784e-49ee-be98-09657f2c9ceb-catalog-content\") pod \"redhat-marketplace-8ml6f\" (UID: \"d05debec-784e-49ee-be98-09657f2c9ceb\") " pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.821066 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b67dx\" (UniqueName: \"kubernetes.io/projected/d05debec-784e-49ee-be98-09657f2c9ceb-kube-api-access-b67dx\") pod \"redhat-marketplace-8ml6f\" (UID: \"d05debec-784e-49ee-be98-09657f2c9ceb\") " pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.821449 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d05debec-784e-49ee-be98-09657f2c9ceb-utilities\") pod \"redhat-marketplace-8ml6f\" (UID: \"d05debec-784e-49ee-be98-09657f2c9ceb\") " pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.821495 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d05debec-784e-49ee-be98-09657f2c9ceb-catalog-content\") pod \"redhat-marketplace-8ml6f\" (UID: \"d05debec-784e-49ee-be98-09657f2c9ceb\") " pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.822139 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d05debec-784e-49ee-be98-09657f2c9ceb-catalog-content\") pod \"redhat-marketplace-8ml6f\" (UID: \"d05debec-784e-49ee-be98-09657f2c9ceb\") " pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.822748 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d05debec-784e-49ee-be98-09657f2c9ceb-utilities\") pod \"redhat-marketplace-8ml6f\" (UID: \"d05debec-784e-49ee-be98-09657f2c9ceb\") " pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.844254 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b67dx\" (UniqueName: \"kubernetes.io/projected/d05debec-784e-49ee-be98-09657f2c9ceb-kube-api-access-b67dx\") pod \"redhat-marketplace-8ml6f\" (UID: \"d05debec-784e-49ee-be98-09657f2c9ceb\") " pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:17 crc kubenswrapper[4915]: I1124 21:34:17.984716 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:18 crc kubenswrapper[4915]: I1124 21:34:18.418538 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ml6f"] Nov 24 21:34:19 crc kubenswrapper[4915]: I1124 21:34:19.303217 4915 generic.go:334] "Generic (PLEG): container finished" podID="d05debec-784e-49ee-be98-09657f2c9ceb" containerID="de33fdd027dade3ab75fa746fceb1f2094ffb1b570a4f946a03f42550ce25cb0" exitCode=0 Nov 24 21:34:19 crc kubenswrapper[4915]: I1124 21:34:19.303317 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ml6f" event={"ID":"d05debec-784e-49ee-be98-09657f2c9ceb","Type":"ContainerDied","Data":"de33fdd027dade3ab75fa746fceb1f2094ffb1b570a4f946a03f42550ce25cb0"} Nov 24 21:34:19 crc kubenswrapper[4915]: I1124 21:34:19.303376 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ml6f" event={"ID":"d05debec-784e-49ee-be98-09657f2c9ceb","Type":"ContainerStarted","Data":"9b4a918d0906b1a14f98effbcd0af5a3a1b698a301728e30d24dbdc5adeb68e6"} Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.259599 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-ngj6x"] Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.260635 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.262946 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.263087 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.265426 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-8p776" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.266301 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.268144 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.274614 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.311268 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ngj6x"] Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.373531 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-metrics\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.373584 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ab13369-460a-457f-a38c-2c8da2e9e81c-tmp\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " 
pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.373622 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-token\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.373649 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-trusted-ca\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.373667 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-config-openshift-service-cacrt\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.373690 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/4ab13369-460a-457f-a38c-2c8da2e9e81c-sa-token\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.373704 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-config\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 
21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.373720 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-entrypoint\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.373737 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-syslog-receiver\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.373754 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4ab13369-460a-457f-a38c-2c8da2e9e81c-datadir\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.373790 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g6p6\" (UniqueName: \"kubernetes.io/projected/4ab13369-460a-457f-a38c-2c8da2e9e81c-kube-api-access-8g6p6\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.452609 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-ngj6x"] Nov 24 21:34:20 crc kubenswrapper[4915]: E1124 21:34:20.453257 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-8g6p6 metrics 
sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-ngj6x" podUID="4ab13369-460a-457f-a38c-2c8da2e9e81c" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.475503 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-metrics\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.475554 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ab13369-460a-457f-a38c-2c8da2e9e81c-tmp\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.475601 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-token\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.475636 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-trusted-ca\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.475660 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-config-openshift-service-cacrt\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " 
pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.475686 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/4ab13369-460a-457f-a38c-2c8da2e9e81c-sa-token\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.475707 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-config\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.475724 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-entrypoint\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.475751 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-syslog-receiver\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.475792 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4ab13369-460a-457f-a38c-2c8da2e9e81c-datadir\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.475826 4915 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-8g6p6\" (UniqueName: \"kubernetes.io/projected/4ab13369-460a-457f-a38c-2c8da2e9e81c-kube-api-access-8g6p6\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: E1124 21:34:20.476469 4915 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Nov 24 21:34:20 crc kubenswrapper[4915]: E1124 21:34:20.476530 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-syslog-receiver podName:4ab13369-460a-457f-a38c-2c8da2e9e81c nodeName:}" failed. No retries permitted until 2025-11-24 21:34:20.976511558 +0000 UTC m=+879.292763731 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-syslog-receiver") pod "collector-ngj6x" (UID: "4ab13369-460a-457f-a38c-2c8da2e9e81c") : secret "collector-syslog-receiver" not found Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.476554 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4ab13369-460a-457f-a38c-2c8da2e9e81c-datadir\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: E1124 21:34:20.476916 4915 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Nov 24 21:34:20 crc kubenswrapper[4915]: E1124 21:34:20.476961 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-metrics podName:4ab13369-460a-457f-a38c-2c8da2e9e81c nodeName:}" failed. 
No retries permitted until 2025-11-24 21:34:20.97694624 +0000 UTC m=+879.293198413 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-metrics") pod "collector-ngj6x" (UID: "4ab13369-460a-457f-a38c-2c8da2e9e81c") : secret "collector-metrics" not found Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.477430 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-config-openshift-service-cacrt\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.477486 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-config\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.478603 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-trusted-ca\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.478696 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-entrypoint\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.488820 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: 
\"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-token\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.490524 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ab13369-460a-457f-a38c-2c8da2e9e81c-tmp\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.494762 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/4ab13369-460a-457f-a38c-2c8da2e9e81c-sa-token\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.495091 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g6p6\" (UniqueName: \"kubernetes.io/projected/4ab13369-460a-457f-a38c-2c8da2e9e81c-kube-api-access-8g6p6\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.984424 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-metrics\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.984532 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-syslog-receiver\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " 
pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.989538 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-metrics\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:20 crc kubenswrapper[4915]: I1124 21:34:20.997224 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-syslog-receiver\") pod \"collector-ngj6x\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " pod="openshift-logging/collector-ngj6x" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.322914 4915 generic.go:334] "Generic (PLEG): container finished" podID="d05debec-784e-49ee-be98-09657f2c9ceb" containerID="fb96201ca01b84274351f123f65f09bad9c064d6f48ed801fa964df7dea4be5e" exitCode=0 Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.322976 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ml6f" event={"ID":"d05debec-784e-49ee-be98-09657f2c9ceb","Type":"ContainerDied","Data":"fb96201ca01b84274351f123f65f09bad9c064d6f48ed801fa964df7dea4be5e"} Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.323363 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-ngj6x" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.339975 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-ngj6x" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.491210 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-syslog-receiver\") pod \"4ab13369-460a-457f-a38c-2c8da2e9e81c\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.491272 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-trusted-ca\") pod \"4ab13369-460a-457f-a38c-2c8da2e9e81c\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.491320 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4ab13369-460a-457f-a38c-2c8da2e9e81c-datadir\") pod \"4ab13369-460a-457f-a38c-2c8da2e9e81c\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.491369 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ab13369-460a-457f-a38c-2c8da2e9e81c-tmp\") pod \"4ab13369-460a-457f-a38c-2c8da2e9e81c\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.491418 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-config\") pod \"4ab13369-460a-457f-a38c-2c8da2e9e81c\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.491452 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: 
\"kubernetes.io/projected/4ab13369-460a-457f-a38c-2c8da2e9e81c-sa-token\") pod \"4ab13369-460a-457f-a38c-2c8da2e9e81c\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.491453 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ab13369-460a-457f-a38c-2c8da2e9e81c-datadir" (OuterVolumeSpecName: "datadir") pod "4ab13369-460a-457f-a38c-2c8da2e9e81c" (UID: "4ab13369-460a-457f-a38c-2c8da2e9e81c"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.491524 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-metrics\") pod \"4ab13369-460a-457f-a38c-2c8da2e9e81c\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.491571 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-entrypoint\") pod \"4ab13369-460a-457f-a38c-2c8da2e9e81c\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.491709 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g6p6\" (UniqueName: \"kubernetes.io/projected/4ab13369-460a-457f-a38c-2c8da2e9e81c-kube-api-access-8g6p6\") pod \"4ab13369-460a-457f-a38c-2c8da2e9e81c\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.491755 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-config-openshift-service-cacrt\") pod \"4ab13369-460a-457f-a38c-2c8da2e9e81c\" (UID: 
\"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.491810 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-token\") pod \"4ab13369-460a-457f-a38c-2c8da2e9e81c\" (UID: \"4ab13369-460a-457f-a38c-2c8da2e9e81c\") " Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.492147 4915 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/4ab13369-460a-457f-a38c-2c8da2e9e81c-datadir\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.492389 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4ab13369-460a-457f-a38c-2c8da2e9e81c" (UID: "4ab13369-460a-457f-a38c-2c8da2e9e81c"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.492504 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "4ab13369-460a-457f-a38c-2c8da2e9e81c" (UID: "4ab13369-460a-457f-a38c-2c8da2e9e81c"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.493154 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-config" (OuterVolumeSpecName: "config") pod "4ab13369-460a-457f-a38c-2c8da2e9e81c" (UID: "4ab13369-460a-457f-a38c-2c8da2e9e81c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.493470 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "4ab13369-460a-457f-a38c-2c8da2e9e81c" (UID: "4ab13369-460a-457f-a38c-2c8da2e9e81c"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.496302 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-metrics" (OuterVolumeSpecName: "metrics") pod "4ab13369-460a-457f-a38c-2c8da2e9e81c" (UID: "4ab13369-460a-457f-a38c-2c8da2e9e81c"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.496368 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ab13369-460a-457f-a38c-2c8da2e9e81c-tmp" (OuterVolumeSpecName: "tmp") pod "4ab13369-460a-457f-a38c-2c8da2e9e81c" (UID: "4ab13369-460a-457f-a38c-2c8da2e9e81c"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.496666 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab13369-460a-457f-a38c-2c8da2e9e81c-kube-api-access-8g6p6" (OuterVolumeSpecName: "kube-api-access-8g6p6") pod "4ab13369-460a-457f-a38c-2c8da2e9e81c" (UID: "4ab13369-460a-457f-a38c-2c8da2e9e81c"). InnerVolumeSpecName "kube-api-access-8g6p6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.496706 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab13369-460a-457f-a38c-2c8da2e9e81c-sa-token" (OuterVolumeSpecName: "sa-token") pod "4ab13369-460a-457f-a38c-2c8da2e9e81c" (UID: "4ab13369-460a-457f-a38c-2c8da2e9e81c"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.502902 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-token" (OuterVolumeSpecName: "collector-token") pod "4ab13369-460a-457f-a38c-2c8da2e9e81c" (UID: "4ab13369-460a-457f-a38c-2c8da2e9e81c"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.503012 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "4ab13369-460a-457f-a38c-2c8da2e9e81c" (UID: "4ab13369-460a-457f-a38c-2c8da2e9e81c"). InnerVolumeSpecName "collector-syslog-receiver". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.593108 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g6p6\" (UniqueName: \"kubernetes.io/projected/4ab13369-460a-457f-a38c-2c8da2e9e81c-kube-api-access-8g6p6\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.593150 4915 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.593161 4915 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.593170 4915 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.593181 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.593189 4915 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ab13369-460a-457f-a38c-2c8da2e9e81c-tmp\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.593197 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 
21:34:21.593205 4915 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/4ab13369-460a-457f-a38c-2c8da2e9e81c-sa-token\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.593215 4915 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/4ab13369-460a-457f-a38c-2c8da2e9e81c-metrics\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:21 crc kubenswrapper[4915]: I1124 21:34:21.593224 4915 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/4ab13369-460a-457f-a38c-2c8da2e9e81c-entrypoint\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.332385 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ml6f" event={"ID":"d05debec-784e-49ee-be98-09657f2c9ceb","Type":"ContainerStarted","Data":"ca01b7539d85d13ed2edf58e0d58fd86814fca9ed511edf2772967951f6df64e"} Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.332397 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-ngj6x" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.359869 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8ml6f" podStartSLOduration=2.699146639 podStartE2EDuration="5.359845554s" podCreationTimestamp="2025-11-24 21:34:17 +0000 UTC" firstStartedPulling="2025-11-24 21:34:19.306072345 +0000 UTC m=+877.622324518" lastFinishedPulling="2025-11-24 21:34:21.96677125 +0000 UTC m=+880.283023433" observedRunningTime="2025-11-24 21:34:22.357707456 +0000 UTC m=+880.673959629" watchObservedRunningTime="2025-11-24 21:34:22.359845554 +0000 UTC m=+880.676097727" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.399310 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-ngj6x"] Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.407631 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-ngj6x"] Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.418683 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-ftlx8"] Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.419847 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.421766 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.422740 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.422877 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.423197 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-8p776" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.424901 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.435389 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.438983 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab13369-460a-457f-a38c-2c8da2e9e81c" path="/var/lib/kubelet/pods/4ab13369-460a-457f-a38c-2c8da2e9e81c/volumes" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.439361 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ftlx8"] Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.506514 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/be6316b9-4bd5-4762-93bb-245771235e4d-config-openshift-service-cacrt\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 
21:34:22.506576 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/be6316b9-4bd5-4762-93bb-245771235e4d-metrics\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.506602 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/be6316b9-4bd5-4762-93bb-245771235e4d-collector-token\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.506646 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/be6316b9-4bd5-4762-93bb-245771235e4d-entrypoint\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.506696 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/be6316b9-4bd5-4762-93bb-245771235e4d-datadir\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.506743 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be6316b9-4bd5-4762-93bb-245771235e4d-config\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.506847 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/be6316b9-4bd5-4762-93bb-245771235e4d-sa-token\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.506876 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be6316b9-4bd5-4762-93bb-245771235e4d-tmp\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.506894 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/be6316b9-4bd5-4762-93bb-245771235e4d-collector-syslog-receiver\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.506937 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be6316b9-4bd5-4762-93bb-245771235e4d-trusted-ca\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.506992 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfvqp\" (UniqueName: \"kubernetes.io/projected/be6316b9-4bd5-4762-93bb-245771235e4d-kube-api-access-kfvqp\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.608547 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/secret/be6316b9-4bd5-4762-93bb-245771235e4d-metrics\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.608609 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/be6316b9-4bd5-4762-93bb-245771235e4d-collector-token\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.608652 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/be6316b9-4bd5-4762-93bb-245771235e4d-entrypoint\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.608699 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/be6316b9-4bd5-4762-93bb-245771235e4d-datadir\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.608739 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be6316b9-4bd5-4762-93bb-245771235e4d-config\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.608798 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/be6316b9-4bd5-4762-93bb-245771235e4d-sa-token\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 
21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.608825 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be6316b9-4bd5-4762-93bb-245771235e4d-tmp\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.608847 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/be6316b9-4bd5-4762-93bb-245771235e4d-collector-syslog-receiver\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.608871 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be6316b9-4bd5-4762-93bb-245771235e4d-trusted-ca\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.608905 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfvqp\" (UniqueName: \"kubernetes.io/projected/be6316b9-4bd5-4762-93bb-245771235e4d-kube-api-access-kfvqp\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.608932 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/be6316b9-4bd5-4762-93bb-245771235e4d-config-openshift-service-cacrt\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.609613 4915 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/be6316b9-4bd5-4762-93bb-245771235e4d-config-openshift-service-cacrt\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.609759 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/be6316b9-4bd5-4762-93bb-245771235e4d-datadir\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.610617 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/be6316b9-4bd5-4762-93bb-245771235e4d-entrypoint\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.610705 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be6316b9-4bd5-4762-93bb-245771235e4d-trusted-ca\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.610969 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be6316b9-4bd5-4762-93bb-245771235e4d-config\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.615726 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/be6316b9-4bd5-4762-93bb-245771235e4d-metrics\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " 
pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.616386 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/be6316b9-4bd5-4762-93bb-245771235e4d-collector-token\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.617225 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/be6316b9-4bd5-4762-93bb-245771235e4d-collector-syslog-receiver\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.620210 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be6316b9-4bd5-4762-93bb-245771235e4d-tmp\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.632447 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/be6316b9-4bd5-4762-93bb-245771235e4d-sa-token\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.640497 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfvqp\" (UniqueName: \"kubernetes.io/projected/be6316b9-4bd5-4762-93bb-245771235e4d-kube-api-access-kfvqp\") pod \"collector-ftlx8\" (UID: \"be6316b9-4bd5-4762-93bb-245771235e4d\") " pod="openshift-logging/collector-ftlx8" Nov 24 21:34:22 crc kubenswrapper[4915]: I1124 21:34:22.740224 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-ftlx8" Nov 24 21:34:23 crc kubenswrapper[4915]: I1124 21:34:23.368980 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-ftlx8"] Nov 24 21:34:23 crc kubenswrapper[4915]: W1124 21:34:23.382028 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe6316b9_4bd5_4762_93bb_245771235e4d.slice/crio-7c3bdbfca90917ce16f7c31e07be3c80398c1b0d3271bd707fa337705f35e613 WatchSource:0}: Error finding container 7c3bdbfca90917ce16f7c31e07be3c80398c1b0d3271bd707fa337705f35e613: Status 404 returned error can't find the container with id 7c3bdbfca90917ce16f7c31e07be3c80398c1b0d3271bd707fa337705f35e613 Nov 24 21:34:24 crc kubenswrapper[4915]: I1124 21:34:24.352599 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ftlx8" event={"ID":"be6316b9-4bd5-4762-93bb-245771235e4d","Type":"ContainerStarted","Data":"7c3bdbfca90917ce16f7c31e07be3c80398c1b0d3271bd707fa337705f35e613"} Nov 24 21:34:27 crc kubenswrapper[4915]: I1124 21:34:27.985023 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:27 crc kubenswrapper[4915]: I1124 21:34:27.985371 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:28 crc kubenswrapper[4915]: I1124 21:34:28.024799 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:28 crc kubenswrapper[4915]: I1124 21:34:28.438279 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:28 crc kubenswrapper[4915]: I1124 21:34:28.480668 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-8ml6f"] Nov 24 21:34:29 crc kubenswrapper[4915]: I1124 21:34:29.389160 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-ftlx8" event={"ID":"be6316b9-4bd5-4762-93bb-245771235e4d","Type":"ContainerStarted","Data":"039863f3ae67bb24f0e19a1d46303df480e150e33b2d8314b6ea5f599afd3579"} Nov 24 21:34:29 crc kubenswrapper[4915]: I1124 21:34:29.410436 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-ftlx8" podStartSLOduration=1.845212064 podStartE2EDuration="7.410414846s" podCreationTimestamp="2025-11-24 21:34:22 +0000 UTC" firstStartedPulling="2025-11-24 21:34:23.392241198 +0000 UTC m=+881.708493371" lastFinishedPulling="2025-11-24 21:34:28.95744398 +0000 UTC m=+887.273696153" observedRunningTime="2025-11-24 21:34:29.407752714 +0000 UTC m=+887.724004897" watchObservedRunningTime="2025-11-24 21:34:29.410414846 +0000 UTC m=+887.726667029" Nov 24 21:34:30 crc kubenswrapper[4915]: I1124 21:34:30.395934 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8ml6f" podUID="d05debec-784e-49ee-be98-09657f2c9ceb" containerName="registry-server" containerID="cri-o://ca01b7539d85d13ed2edf58e0d58fd86814fca9ed511edf2772967951f6df64e" gracePeriod=2 Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.286387 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.350447 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d05debec-784e-49ee-be98-09657f2c9ceb-utilities\") pod \"d05debec-784e-49ee-be98-09657f2c9ceb\" (UID: \"d05debec-784e-49ee-be98-09657f2c9ceb\") " Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.350579 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d05debec-784e-49ee-be98-09657f2c9ceb-catalog-content\") pod \"d05debec-784e-49ee-be98-09657f2c9ceb\" (UID: \"d05debec-784e-49ee-be98-09657f2c9ceb\") " Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.350620 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b67dx\" (UniqueName: \"kubernetes.io/projected/d05debec-784e-49ee-be98-09657f2c9ceb-kube-api-access-b67dx\") pod \"d05debec-784e-49ee-be98-09657f2c9ceb\" (UID: \"d05debec-784e-49ee-be98-09657f2c9ceb\") " Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.351208 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d05debec-784e-49ee-be98-09657f2c9ceb-utilities" (OuterVolumeSpecName: "utilities") pod "d05debec-784e-49ee-be98-09657f2c9ceb" (UID: "d05debec-784e-49ee-be98-09657f2c9ceb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.356916 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d05debec-784e-49ee-be98-09657f2c9ceb-kube-api-access-b67dx" (OuterVolumeSpecName: "kube-api-access-b67dx") pod "d05debec-784e-49ee-be98-09657f2c9ceb" (UID: "d05debec-784e-49ee-be98-09657f2c9ceb"). InnerVolumeSpecName "kube-api-access-b67dx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.371028 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d05debec-784e-49ee-be98-09657f2c9ceb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d05debec-784e-49ee-be98-09657f2c9ceb" (UID: "d05debec-784e-49ee-be98-09657f2c9ceb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.404109 4915 generic.go:334] "Generic (PLEG): container finished" podID="d05debec-784e-49ee-be98-09657f2c9ceb" containerID="ca01b7539d85d13ed2edf58e0d58fd86814fca9ed511edf2772967951f6df64e" exitCode=0 Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.404175 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ml6f" event={"ID":"d05debec-784e-49ee-be98-09657f2c9ceb","Type":"ContainerDied","Data":"ca01b7539d85d13ed2edf58e0d58fd86814fca9ed511edf2772967951f6df64e"} Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.404239 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8ml6f" event={"ID":"d05debec-784e-49ee-be98-09657f2c9ceb","Type":"ContainerDied","Data":"9b4a918d0906b1a14f98effbcd0af5a3a1b698a301728e30d24dbdc5adeb68e6"} Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.404271 4915 scope.go:117] "RemoveContainer" containerID="ca01b7539d85d13ed2edf58e0d58fd86814fca9ed511edf2772967951f6df64e" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.405075 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8ml6f" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.443168 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ml6f"] Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.447053 4915 scope.go:117] "RemoveContainer" containerID="fb96201ca01b84274351f123f65f09bad9c064d6f48ed801fa964df7dea4be5e" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.450223 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8ml6f"] Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.453524 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d05debec-784e-49ee-be98-09657f2c9ceb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.453568 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b67dx\" (UniqueName: \"kubernetes.io/projected/d05debec-784e-49ee-be98-09657f2c9ceb-kube-api-access-b67dx\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.453585 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d05debec-784e-49ee-be98-09657f2c9ceb-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.466524 4915 scope.go:117] "RemoveContainer" containerID="de33fdd027dade3ab75fa746fceb1f2094ffb1b570a4f946a03f42550ce25cb0" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.495498 4915 scope.go:117] "RemoveContainer" containerID="ca01b7539d85d13ed2edf58e0d58fd86814fca9ed511edf2772967951f6df64e" Nov 24 21:34:31 crc kubenswrapper[4915]: E1124 21:34:31.496132 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ca01b7539d85d13ed2edf58e0d58fd86814fca9ed511edf2772967951f6df64e\": container with ID starting with ca01b7539d85d13ed2edf58e0d58fd86814fca9ed511edf2772967951f6df64e not found: ID does not exist" containerID="ca01b7539d85d13ed2edf58e0d58fd86814fca9ed511edf2772967951f6df64e" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.496162 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca01b7539d85d13ed2edf58e0d58fd86814fca9ed511edf2772967951f6df64e"} err="failed to get container status \"ca01b7539d85d13ed2edf58e0d58fd86814fca9ed511edf2772967951f6df64e\": rpc error: code = NotFound desc = could not find container \"ca01b7539d85d13ed2edf58e0d58fd86814fca9ed511edf2772967951f6df64e\": container with ID starting with ca01b7539d85d13ed2edf58e0d58fd86814fca9ed511edf2772967951f6df64e not found: ID does not exist" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.496201 4915 scope.go:117] "RemoveContainer" containerID="fb96201ca01b84274351f123f65f09bad9c064d6f48ed801fa964df7dea4be5e" Nov 24 21:34:31 crc kubenswrapper[4915]: E1124 21:34:31.496559 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb96201ca01b84274351f123f65f09bad9c064d6f48ed801fa964df7dea4be5e\": container with ID starting with fb96201ca01b84274351f123f65f09bad9c064d6f48ed801fa964df7dea4be5e not found: ID does not exist" containerID="fb96201ca01b84274351f123f65f09bad9c064d6f48ed801fa964df7dea4be5e" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.496578 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb96201ca01b84274351f123f65f09bad9c064d6f48ed801fa964df7dea4be5e"} err="failed to get container status \"fb96201ca01b84274351f123f65f09bad9c064d6f48ed801fa964df7dea4be5e\": rpc error: code = NotFound desc = could not find container \"fb96201ca01b84274351f123f65f09bad9c064d6f48ed801fa964df7dea4be5e\": container with ID 
starting with fb96201ca01b84274351f123f65f09bad9c064d6f48ed801fa964df7dea4be5e not found: ID does not exist" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.496590 4915 scope.go:117] "RemoveContainer" containerID="de33fdd027dade3ab75fa746fceb1f2094ffb1b570a4f946a03f42550ce25cb0" Nov 24 21:34:31 crc kubenswrapper[4915]: E1124 21:34:31.496895 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de33fdd027dade3ab75fa746fceb1f2094ffb1b570a4f946a03f42550ce25cb0\": container with ID starting with de33fdd027dade3ab75fa746fceb1f2094ffb1b570a4f946a03f42550ce25cb0 not found: ID does not exist" containerID="de33fdd027dade3ab75fa746fceb1f2094ffb1b570a4f946a03f42550ce25cb0" Nov 24 21:34:31 crc kubenswrapper[4915]: I1124 21:34:31.497014 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de33fdd027dade3ab75fa746fceb1f2094ffb1b570a4f946a03f42550ce25cb0"} err="failed to get container status \"de33fdd027dade3ab75fa746fceb1f2094ffb1b570a4f946a03f42550ce25cb0\": rpc error: code = NotFound desc = could not find container \"de33fdd027dade3ab75fa746fceb1f2094ffb1b570a4f946a03f42550ce25cb0\": container with ID starting with de33fdd027dade3ab75fa746fceb1f2094ffb1b570a4f946a03f42550ce25cb0 not found: ID does not exist" Nov 24 21:34:32 crc kubenswrapper[4915]: I1124 21:34:32.436322 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d05debec-784e-49ee-be98-09657f2c9ceb" path="/var/lib/kubelet/pods/d05debec-784e-49ee-be98-09657f2c9ceb/volumes" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.639363 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj"] Nov 24 21:35:01 crc kubenswrapper[4915]: E1124 21:35:01.640204 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d05debec-784e-49ee-be98-09657f2c9ceb" 
containerName="extract-utilities" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.640223 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d05debec-784e-49ee-be98-09657f2c9ceb" containerName="extract-utilities" Nov 24 21:35:01 crc kubenswrapper[4915]: E1124 21:35:01.640259 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d05debec-784e-49ee-be98-09657f2c9ceb" containerName="registry-server" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.640266 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d05debec-784e-49ee-be98-09657f2c9ceb" containerName="registry-server" Nov 24 21:35:01 crc kubenswrapper[4915]: E1124 21:35:01.640274 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d05debec-784e-49ee-be98-09657f2c9ceb" containerName="extract-content" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.640281 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d05debec-784e-49ee-be98-09657f2c9ceb" containerName="extract-content" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.640393 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d05debec-784e-49ee-be98-09657f2c9ceb" containerName="registry-server" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.641494 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.643423 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.652537 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj"] Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.740811 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdwtn\" (UniqueName: \"kubernetes.io/projected/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-kube-api-access-fdwtn\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj\" (UID: \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.740859 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj\" (UID: \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.740914 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj\" (UID: \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" Nov 24 21:35:01 crc kubenswrapper[4915]: 
I1124 21:35:01.841903 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj\" (UID: \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.842015 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdwtn\" (UniqueName: \"kubernetes.io/projected/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-kube-api-access-fdwtn\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj\" (UID: \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.842038 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj\" (UID: \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.842401 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj\" (UID: \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.842424 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj\" (UID: \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.862894 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdwtn\" (UniqueName: \"kubernetes.io/projected/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-kube-api-access-fdwtn\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj\" (UID: \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" Nov 24 21:35:01 crc kubenswrapper[4915]: I1124 21:35:01.966999 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" Nov 24 21:35:02 crc kubenswrapper[4915]: I1124 21:35:02.454680 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj"] Nov 24 21:35:02 crc kubenswrapper[4915]: I1124 21:35:02.697813 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" event={"ID":"fbb9899f-70f3-48f3-9bcf-d9d591ba184d","Type":"ContainerStarted","Data":"abd999ec84be60db58c6f03d7c18e9252acec7935b2e0a5d8f6fdf33f3579758"} Nov 24 21:35:02 crc kubenswrapper[4915]: I1124 21:35:02.697877 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" event={"ID":"fbb9899f-70f3-48f3-9bcf-d9d591ba184d","Type":"ContainerStarted","Data":"cad478444aea9f397990feb812c01728afea43431cc4a9cedb10fe48455d6df4"} Nov 24 21:35:03 crc kubenswrapper[4915]: I1124 21:35:03.711288 4915 
generic.go:334] "Generic (PLEG): container finished" podID="fbb9899f-70f3-48f3-9bcf-d9d591ba184d" containerID="abd999ec84be60db58c6f03d7c18e9252acec7935b2e0a5d8f6fdf33f3579758" exitCode=0 Nov 24 21:35:03 crc kubenswrapper[4915]: I1124 21:35:03.711808 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" event={"ID":"fbb9899f-70f3-48f3-9bcf-d9d591ba184d","Type":"ContainerDied","Data":"abd999ec84be60db58c6f03d7c18e9252acec7935b2e0a5d8f6fdf33f3579758"} Nov 24 21:35:06 crc kubenswrapper[4915]: I1124 21:35:06.734744 4915 generic.go:334] "Generic (PLEG): container finished" podID="fbb9899f-70f3-48f3-9bcf-d9d591ba184d" containerID="fb94f00e66aa4cb9a788c3500120030895e2e693eee3258c190137663d4d6ede" exitCode=0 Nov 24 21:35:06 crc kubenswrapper[4915]: I1124 21:35:06.736464 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" event={"ID":"fbb9899f-70f3-48f3-9bcf-d9d591ba184d","Type":"ContainerDied","Data":"fb94f00e66aa4cb9a788c3500120030895e2e693eee3258c190137663d4d6ede"} Nov 24 21:35:07 crc kubenswrapper[4915]: I1124 21:35:07.748680 4915 generic.go:334] "Generic (PLEG): container finished" podID="fbb9899f-70f3-48f3-9bcf-d9d591ba184d" containerID="f444830e5e81bee8e293892917390b3fa929d66174c03f5e12078b4a6098654b" exitCode=0 Nov 24 21:35:07 crc kubenswrapper[4915]: I1124 21:35:07.748727 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" event={"ID":"fbb9899f-70f3-48f3-9bcf-d9d591ba184d","Type":"ContainerDied","Data":"f444830e5e81bee8e293892917390b3fa929d66174c03f5e12078b4a6098654b"} Nov 24 21:35:09 crc kubenswrapper[4915]: I1124 21:35:09.006036 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" Nov 24 21:35:09 crc kubenswrapper[4915]: I1124 21:35:09.077726 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdwtn\" (UniqueName: \"kubernetes.io/projected/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-kube-api-access-fdwtn\") pod \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\" (UID: \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\") " Nov 24 21:35:09 crc kubenswrapper[4915]: I1124 21:35:09.077792 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-util\") pod \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\" (UID: \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\") " Nov 24 21:35:09 crc kubenswrapper[4915]: I1124 21:35:09.077968 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-bundle\") pod \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\" (UID: \"fbb9899f-70f3-48f3-9bcf-d9d591ba184d\") " Nov 24 21:35:09 crc kubenswrapper[4915]: I1124 21:35:09.078761 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-bundle" (OuterVolumeSpecName: "bundle") pod "fbb9899f-70f3-48f3-9bcf-d9d591ba184d" (UID: "fbb9899f-70f3-48f3-9bcf-d9d591ba184d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:35:09 crc kubenswrapper[4915]: I1124 21:35:09.089174 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-kube-api-access-fdwtn" (OuterVolumeSpecName: "kube-api-access-fdwtn") pod "fbb9899f-70f3-48f3-9bcf-d9d591ba184d" (UID: "fbb9899f-70f3-48f3-9bcf-d9d591ba184d"). InnerVolumeSpecName "kube-api-access-fdwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:35:09 crc kubenswrapper[4915]: I1124 21:35:09.089276 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-util" (OuterVolumeSpecName: "util") pod "fbb9899f-70f3-48f3-9bcf-d9d591ba184d" (UID: "fbb9899f-70f3-48f3-9bcf-d9d591ba184d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:35:09 crc kubenswrapper[4915]: I1124 21:35:09.179505 4915 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:09 crc kubenswrapper[4915]: I1124 21:35:09.179546 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdwtn\" (UniqueName: \"kubernetes.io/projected/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-kube-api-access-fdwtn\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:09 crc kubenswrapper[4915]: I1124 21:35:09.179558 4915 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbb9899f-70f3-48f3-9bcf-d9d591ba184d-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:09 crc kubenswrapper[4915]: I1124 21:35:09.764601 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" event={"ID":"fbb9899f-70f3-48f3-9bcf-d9d591ba184d","Type":"ContainerDied","Data":"cad478444aea9f397990feb812c01728afea43431cc4a9cedb10fe48455d6df4"} Nov 24 21:35:09 crc kubenswrapper[4915]: I1124 21:35:09.764653 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cad478444aea9f397990feb812c01728afea43431cc4a9cedb10fe48455d6df4" Nov 24 21:35:09 crc kubenswrapper[4915]: I1124 21:35:09.764694 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj" Nov 24 21:35:13 crc kubenswrapper[4915]: I1124 21:35:13.650656 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-7jsng"] Nov 24 21:35:13 crc kubenswrapper[4915]: E1124 21:35:13.652587 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb9899f-70f3-48f3-9bcf-d9d591ba184d" containerName="pull" Nov 24 21:35:13 crc kubenswrapper[4915]: I1124 21:35:13.652691 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb9899f-70f3-48f3-9bcf-d9d591ba184d" containerName="pull" Nov 24 21:35:13 crc kubenswrapper[4915]: E1124 21:35:13.652801 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb9899f-70f3-48f3-9bcf-d9d591ba184d" containerName="util" Nov 24 21:35:13 crc kubenswrapper[4915]: I1124 21:35:13.652889 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb9899f-70f3-48f3-9bcf-d9d591ba184d" containerName="util" Nov 24 21:35:13 crc kubenswrapper[4915]: E1124 21:35:13.652987 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb9899f-70f3-48f3-9bcf-d9d591ba184d" containerName="extract" Nov 24 21:35:13 crc kubenswrapper[4915]: I1124 21:35:13.653055 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb9899f-70f3-48f3-9bcf-d9d591ba184d" containerName="extract" Nov 24 21:35:13 crc kubenswrapper[4915]: I1124 21:35:13.653296 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb9899f-70f3-48f3-9bcf-d9d591ba184d" containerName="extract" Nov 24 21:35:13 crc kubenswrapper[4915]: I1124 21:35:13.654028 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-7jsng" Nov 24 21:35:13 crc kubenswrapper[4915]: I1124 21:35:13.656608 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 24 21:35:13 crc kubenswrapper[4915]: I1124 21:35:13.656860 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 24 21:35:13 crc kubenswrapper[4915]: I1124 21:35:13.658452 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-7qnbw" Nov 24 21:35:13 crc kubenswrapper[4915]: I1124 21:35:13.664597 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-7jsng"] Nov 24 21:35:13 crc kubenswrapper[4915]: I1124 21:35:13.755880 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfs9\" (UniqueName: \"kubernetes.io/projected/02d4045f-82d7-42b3-88eb-a4011970e80f-kube-api-access-pwfs9\") pod \"nmstate-operator-557fdffb88-7jsng\" (UID: \"02d4045f-82d7-42b3-88eb-a4011970e80f\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-7jsng" Nov 24 21:35:13 crc kubenswrapper[4915]: I1124 21:35:13.857271 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwfs9\" (UniqueName: \"kubernetes.io/projected/02d4045f-82d7-42b3-88eb-a4011970e80f-kube-api-access-pwfs9\") pod \"nmstate-operator-557fdffb88-7jsng\" (UID: \"02d4045f-82d7-42b3-88eb-a4011970e80f\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-7jsng" Nov 24 21:35:13 crc kubenswrapper[4915]: I1124 21:35:13.883648 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwfs9\" (UniqueName: \"kubernetes.io/projected/02d4045f-82d7-42b3-88eb-a4011970e80f-kube-api-access-pwfs9\") pod \"nmstate-operator-557fdffb88-7jsng\" (UID: 
\"02d4045f-82d7-42b3-88eb-a4011970e80f\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-7jsng" Nov 24 21:35:13 crc kubenswrapper[4915]: I1124 21:35:13.994517 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-7jsng" Nov 24 21:35:14 crc kubenswrapper[4915]: I1124 21:35:14.238143 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-7jsng"] Nov 24 21:35:14 crc kubenswrapper[4915]: I1124 21:35:14.800924 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-7jsng" event={"ID":"02d4045f-82d7-42b3-88eb-a4011970e80f","Type":"ContainerStarted","Data":"d953a3d1ed0776c8f4fc52ceb4305d90a2356a73a0bb2d9cd25917015d058490"} Nov 24 21:35:17 crc kubenswrapper[4915]: I1124 21:35:17.832914 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-7jsng" event={"ID":"02d4045f-82d7-42b3-88eb-a4011970e80f","Type":"ContainerStarted","Data":"09b4d61105ccf775f6918566dc53626168d24fbe66fbab1f602a2b649e70a26b"} Nov 24 21:35:17 crc kubenswrapper[4915]: I1124 21:35:17.858344 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-7jsng" podStartSLOduration=2.225627361 podStartE2EDuration="4.858305748s" podCreationTimestamp="2025-11-24 21:35:13 +0000 UTC" firstStartedPulling="2025-11-24 21:35:14.249220761 +0000 UTC m=+932.565472924" lastFinishedPulling="2025-11-24 21:35:16.881899138 +0000 UTC m=+935.198151311" observedRunningTime="2025-11-24 21:35:17.856419127 +0000 UTC m=+936.172671320" watchObservedRunningTime="2025-11-24 21:35:17.858305748 +0000 UTC m=+936.174557931" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.779806 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-srvwk"] Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 
21:35:22.781905 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-srvwk" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.784504 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-7zzkr" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.787142 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs8lm\" (UniqueName: \"kubernetes.io/projected/b329347f-4c7a-4973-b432-fb033406721f-kube-api-access-xs8lm\") pod \"nmstate-metrics-5dcf9c57c5-srvwk\" (UID: \"b329347f-4c7a-4973-b432-fb033406721f\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-srvwk" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.787943 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp"] Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.789129 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.790878 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.799803 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-srvwk"] Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.807424 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-brgx6"] Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.808666 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.812937 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp"] Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.889644 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/deb64d76-5b5e-481c-8486-c387b0579aa5-dbus-socket\") pod \"nmstate-handler-brgx6\" (UID: \"deb64d76-5b5e-481c-8486-c387b0579aa5\") " pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.889699 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/deb64d76-5b5e-481c-8486-c387b0579aa5-nmstate-lock\") pod \"nmstate-handler-brgx6\" (UID: \"deb64d76-5b5e-481c-8486-c387b0579aa5\") " pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.889727 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/deb64d76-5b5e-481c-8486-c387b0579aa5-ovs-socket\") pod \"nmstate-handler-brgx6\" (UID: \"deb64d76-5b5e-481c-8486-c387b0579aa5\") " pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.889757 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f28e1e42-b484-49c7-877f-43fd48691994-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-tgnfp\" (UID: \"f28e1e42-b484-49c7-877f-43fd48691994\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.889817 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-gwnhk\" (UniqueName: \"kubernetes.io/projected/f28e1e42-b484-49c7-877f-43fd48691994-kube-api-access-gwnhk\") pod \"nmstate-webhook-6b89b748d8-tgnfp\" (UID: \"f28e1e42-b484-49c7-877f-43fd48691994\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.889840 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99rpj\" (UniqueName: \"kubernetes.io/projected/deb64d76-5b5e-481c-8486-c387b0579aa5-kube-api-access-99rpj\") pod \"nmstate-handler-brgx6\" (UID: \"deb64d76-5b5e-481c-8486-c387b0579aa5\") " pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.889866 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs8lm\" (UniqueName: \"kubernetes.io/projected/b329347f-4c7a-4973-b432-fb033406721f-kube-api-access-xs8lm\") pod \"nmstate-metrics-5dcf9c57c5-srvwk\" (UID: \"b329347f-4c7a-4973-b432-fb033406721f\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-srvwk" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.922193 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs8lm\" (UniqueName: \"kubernetes.io/projected/b329347f-4c7a-4973-b432-fb033406721f-kube-api-access-xs8lm\") pod \"nmstate-metrics-5dcf9c57c5-srvwk\" (UID: \"b329347f-4c7a-4973-b432-fb033406721f\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-srvwk" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.970506 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj"] Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.991490 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/deb64d76-5b5e-481c-8486-c387b0579aa5-ovs-socket\") pod \"nmstate-handler-brgx6\" 
(UID: \"deb64d76-5b5e-481c-8486-c387b0579aa5\") " pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.991577 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f28e1e42-b484-49c7-877f-43fd48691994-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-tgnfp\" (UID: \"f28e1e42-b484-49c7-877f-43fd48691994\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.991630 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwnhk\" (UniqueName: \"kubernetes.io/projected/f28e1e42-b484-49c7-877f-43fd48691994-kube-api-access-gwnhk\") pod \"nmstate-webhook-6b89b748d8-tgnfp\" (UID: \"f28e1e42-b484-49c7-877f-43fd48691994\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.991659 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99rpj\" (UniqueName: \"kubernetes.io/projected/deb64d76-5b5e-481c-8486-c387b0579aa5-kube-api-access-99rpj\") pod \"nmstate-handler-brgx6\" (UID: \"deb64d76-5b5e-481c-8486-c387b0579aa5\") " pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.991739 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/deb64d76-5b5e-481c-8486-c387b0579aa5-dbus-socket\") pod \"nmstate-handler-brgx6\" (UID: \"deb64d76-5b5e-481c-8486-c387b0579aa5\") " pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.991805 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/deb64d76-5b5e-481c-8486-c387b0579aa5-nmstate-lock\") pod \"nmstate-handler-brgx6\" (UID: 
\"deb64d76-5b5e-481c-8486-c387b0579aa5\") " pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.991937 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/deb64d76-5b5e-481c-8486-c387b0579aa5-nmstate-lock\") pod \"nmstate-handler-brgx6\" (UID: \"deb64d76-5b5e-481c-8486-c387b0579aa5\") " pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.991988 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/deb64d76-5b5e-481c-8486-c387b0579aa5-ovs-socket\") pod \"nmstate-handler-brgx6\" (UID: \"deb64d76-5b5e-481c-8486-c387b0579aa5\") " pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.993912 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/deb64d76-5b5e-481c-8486-c387b0579aa5-dbus-socket\") pod \"nmstate-handler-brgx6\" (UID: \"deb64d76-5b5e-481c-8486-c387b0579aa5\") " pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:22 crc kubenswrapper[4915]: I1124 21:35:22.997240 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.010512 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/f28e1e42-b484-49c7-877f-43fd48691994-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-tgnfp\" (UID: \"f28e1e42-b484-49c7-877f-43fd48691994\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.010679 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.011438 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.011700 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-f2477" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.018559 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj"] Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.036600 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwnhk\" (UniqueName: \"kubernetes.io/projected/f28e1e42-b484-49c7-877f-43fd48691994-kube-api-access-gwnhk\") pod \"nmstate-webhook-6b89b748d8-tgnfp\" (UID: \"f28e1e42-b484-49c7-877f-43fd48691994\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.036648 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99rpj\" (UniqueName: \"kubernetes.io/projected/deb64d76-5b5e-481c-8486-c387b0579aa5-kube-api-access-99rpj\") pod \"nmstate-handler-brgx6\" (UID: \"deb64d76-5b5e-481c-8486-c387b0579aa5\") " pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:23 crc 
kubenswrapper[4915]: I1124 21:35:23.094729 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a6569ffe-880f-4e4e-8603-391a495fbb50-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-kvpdj\" (UID: \"a6569ffe-880f-4e4e-8603-391a495fbb50\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.094829 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6569ffe-880f-4e4e-8603-391a495fbb50-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-kvpdj\" (UID: \"a6569ffe-880f-4e4e-8603-391a495fbb50\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.094866 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh9fd\" (UniqueName: \"kubernetes.io/projected/a6569ffe-880f-4e4e-8603-391a495fbb50-kube-api-access-hh9fd\") pod \"nmstate-console-plugin-5874bd7bc5-kvpdj\" (UID: \"a6569ffe-880f-4e4e-8603-391a495fbb50\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.106312 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-srvwk" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.129418 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.135763 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.195869 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh9fd\" (UniqueName: \"kubernetes.io/projected/a6569ffe-880f-4e4e-8603-391a495fbb50-kube-api-access-hh9fd\") pod \"nmstate-console-plugin-5874bd7bc5-kvpdj\" (UID: \"a6569ffe-880f-4e4e-8603-391a495fbb50\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.196205 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a6569ffe-880f-4e4e-8603-391a495fbb50-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-kvpdj\" (UID: \"a6569ffe-880f-4e4e-8603-391a495fbb50\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.196278 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6569ffe-880f-4e4e-8603-391a495fbb50-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-kvpdj\" (UID: \"a6569ffe-880f-4e4e-8603-391a495fbb50\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.197200 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a6569ffe-880f-4e4e-8603-391a495fbb50-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-kvpdj\" (UID: \"a6569ffe-880f-4e4e-8603-391a495fbb50\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.200055 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6f65944744-l2cf5"] Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.201353 4915 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.203441 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6569ffe-880f-4e4e-8603-391a495fbb50-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-kvpdj\" (UID: \"a6569ffe-880f-4e4e-8603-391a495fbb50\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.210048 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f65944744-l2cf5"] Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.224327 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh9fd\" (UniqueName: \"kubernetes.io/projected/a6569ffe-880f-4e4e-8603-391a495fbb50-kube-api-access-hh9fd\") pod \"nmstate-console-plugin-5874bd7bc5-kvpdj\" (UID: \"a6569ffe-880f-4e4e-8603-391a495fbb50\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.297693 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51dde57d-c05e-4785-b9f6-33f710b32806-console-serving-cert\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.297757 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-oauth-serving-cert\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 
21:35:23.297816 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-trusted-ca-bundle\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.297855 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51dde57d-c05e-4785-b9f6-33f710b32806-console-oauth-config\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.297893 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-service-ca\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.297941 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbnzq\" (UniqueName: \"kubernetes.io/projected/51dde57d-c05e-4785-b9f6-33f710b32806-kube-api-access-nbnzq\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.298008 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-console-config\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " 
pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.399187 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51dde57d-c05e-4785-b9f6-33f710b32806-console-serving-cert\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.399583 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-oauth-serving-cert\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.399633 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-trusted-ca-bundle\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.399672 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-service-ca\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.399715 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51dde57d-c05e-4785-b9f6-33f710b32806-console-oauth-config\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " 
pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.399745 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbnzq\" (UniqueName: \"kubernetes.io/projected/51dde57d-c05e-4785-b9f6-33f710b32806-kube-api-access-nbnzq\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.399819 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-console-config\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.401265 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-service-ca\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.401819 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-oauth-serving-cert\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.402160 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-console-config\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 
crc kubenswrapper[4915]: I1124 21:35:23.404462 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51dde57d-c05e-4785-b9f6-33f710b32806-console-serving-cert\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.405220 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-trusted-ca-bundle\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.407825 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51dde57d-c05e-4785-b9f6-33f710b32806-console-oauth-config\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.415707 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.416067 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbnzq\" (UniqueName: \"kubernetes.io/projected/51dde57d-c05e-4785-b9f6-33f710b32806-kube-api-access-nbnzq\") pod \"console-6f65944744-l2cf5\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.525632 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.688986 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-srvwk"] Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.776443 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj"] Nov 24 21:35:23 crc kubenswrapper[4915]: W1124 21:35:23.783422 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6569ffe_880f_4e4e_8603_391a495fbb50.slice/crio-52a5651672946f06efaedf8cd4c9c495214425467f03c487cc5e9035b757444a WatchSource:0}: Error finding container 52a5651672946f06efaedf8cd4c9c495214425467f03c487cc5e9035b757444a: Status 404 returned error can't find the container with id 52a5651672946f06efaedf8cd4c9c495214425467f03c487cc5e9035b757444a Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.794330 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp"] Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.876628 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-brgx6" event={"ID":"deb64d76-5b5e-481c-8486-c387b0579aa5","Type":"ContainerStarted","Data":"cc5fcc2980493fcfef3fd2abc206efddea1ac35dbdfcad6ad32d07fdf2a9794a"} Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.877449 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj" event={"ID":"a6569ffe-880f-4e4e-8603-391a495fbb50","Type":"ContainerStarted","Data":"52a5651672946f06efaedf8cd4c9c495214425467f03c487cc5e9035b757444a"} Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.878359 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp" 
event={"ID":"f28e1e42-b484-49c7-877f-43fd48691994","Type":"ContainerStarted","Data":"a13e459e2eed9a357c37505ae4753b3981c74d5c6c69d88e639f3992b36c331d"} Nov 24 21:35:23 crc kubenswrapper[4915]: I1124 21:35:23.880386 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-srvwk" event={"ID":"b329347f-4c7a-4973-b432-fb033406721f","Type":"ContainerStarted","Data":"6db96ffdd27ee447361351da6fb39f1586f729015a1837c4b513faebb9c726ba"} Nov 24 21:35:24 crc kubenswrapper[4915]: I1124 21:35:24.113157 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f65944744-l2cf5"] Nov 24 21:35:24 crc kubenswrapper[4915]: W1124 21:35:24.125881 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51dde57d_c05e_4785_b9f6_33f710b32806.slice/crio-295aadead969e766de86bbe3bf5c06c904e4dd275fc4e00765a2e18b10d0d992 WatchSource:0}: Error finding container 295aadead969e766de86bbe3bf5c06c904e4dd275fc4e00765a2e18b10d0d992: Status 404 returned error can't find the container with id 295aadead969e766de86bbe3bf5c06c904e4dd275fc4e00765a2e18b10d0d992 Nov 24 21:35:24 crc kubenswrapper[4915]: I1124 21:35:24.327203 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:35:24 crc kubenswrapper[4915]: I1124 21:35:24.327631 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:35:24 crc kubenswrapper[4915]: I1124 21:35:24.890278 
4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f65944744-l2cf5" event={"ID":"51dde57d-c05e-4785-b9f6-33f710b32806","Type":"ContainerStarted","Data":"dcf0facf4614ea7610099141ed459282e1d9189cb1453b5d69ea86df5a4448dc"} Nov 24 21:35:24 crc kubenswrapper[4915]: I1124 21:35:24.890319 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f65944744-l2cf5" event={"ID":"51dde57d-c05e-4785-b9f6-33f710b32806","Type":"ContainerStarted","Data":"295aadead969e766de86bbe3bf5c06c904e4dd275fc4e00765a2e18b10d0d992"} Nov 24 21:35:24 crc kubenswrapper[4915]: I1124 21:35:24.920851 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6f65944744-l2cf5" podStartSLOduration=1.920836306 podStartE2EDuration="1.920836306s" podCreationTimestamp="2025-11-24 21:35:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:35:24.917398853 +0000 UTC m=+943.233651026" watchObservedRunningTime="2025-11-24 21:35:24.920836306 +0000 UTC m=+943.237088479" Nov 24 21:35:27 crc kubenswrapper[4915]: I1124 21:35:27.925304 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp" event={"ID":"f28e1e42-b484-49c7-877f-43fd48691994","Type":"ContainerStarted","Data":"718a6dc0bd9e560b8c8f55819b6daa001124091ca57aee42c6543441826f0662"} Nov 24 21:35:27 crc kubenswrapper[4915]: I1124 21:35:27.925880 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp" Nov 24 21:35:27 crc kubenswrapper[4915]: I1124 21:35:27.928396 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-srvwk" event={"ID":"b329347f-4c7a-4973-b432-fb033406721f","Type":"ContainerStarted","Data":"d4ba60a023ee438bb42bbc81eb43f1b80b257acb01dfc72064448be56e80b5f5"} Nov 24 
21:35:27 crc kubenswrapper[4915]: I1124 21:35:27.930188 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-brgx6" event={"ID":"deb64d76-5b5e-481c-8486-c387b0579aa5","Type":"ContainerStarted","Data":"833f2e80b982b5034b2339ccece15a6bf386e4303c0e72fff6261660bab3b5e4"} Nov 24 21:35:27 crc kubenswrapper[4915]: I1124 21:35:27.930278 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:27 crc kubenswrapper[4915]: I1124 21:35:27.932356 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj" event={"ID":"a6569ffe-880f-4e4e-8603-391a495fbb50","Type":"ContainerStarted","Data":"6caa4c673c4fb95b22812ed3537d581ce64d328e3f07d45c6c4bf40ca84c3cd0"} Nov 24 21:35:27 crc kubenswrapper[4915]: I1124 21:35:27.954878 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp" podStartSLOduration=2.870132979 podStartE2EDuration="5.954856856s" podCreationTimestamp="2025-11-24 21:35:22 +0000 UTC" firstStartedPulling="2025-11-24 21:35:23.794571259 +0000 UTC m=+942.110823432" lastFinishedPulling="2025-11-24 21:35:26.879295136 +0000 UTC m=+945.195547309" observedRunningTime="2025-11-24 21:35:27.944766634 +0000 UTC m=+946.261018827" watchObservedRunningTime="2025-11-24 21:35:27.954856856 +0000 UTC m=+946.271109029" Nov 24 21:35:27 crc kubenswrapper[4915]: I1124 21:35:27.993634 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-kvpdj" podStartSLOduration=2.901608767 podStartE2EDuration="5.993611281s" podCreationTimestamp="2025-11-24 21:35:22 +0000 UTC" firstStartedPulling="2025-11-24 21:35:23.787450346 +0000 UTC m=+942.103702519" lastFinishedPulling="2025-11-24 21:35:26.87945286 +0000 UTC m=+945.195705033" observedRunningTime="2025-11-24 21:35:27.966095419 +0000 UTC 
m=+946.282347602" watchObservedRunningTime="2025-11-24 21:35:27.993611281 +0000 UTC m=+946.309863464" Nov 24 21:35:29 crc kubenswrapper[4915]: I1124 21:35:29.949161 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-srvwk" event={"ID":"b329347f-4c7a-4973-b432-fb033406721f","Type":"ContainerStarted","Data":"35331a746b3ef79825281dd4c29d736b1a6a71a35a125b3ba63156f82e3c4663"} Nov 24 21:35:29 crc kubenswrapper[4915]: I1124 21:35:29.975265 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-brgx6" podStartSLOduration=4.320662383 podStartE2EDuration="7.975230355s" podCreationTimestamp="2025-11-24 21:35:22 +0000 UTC" firstStartedPulling="2025-11-24 21:35:23.229994386 +0000 UTC m=+941.546246559" lastFinishedPulling="2025-11-24 21:35:26.884562358 +0000 UTC m=+945.200814531" observedRunningTime="2025-11-24 21:35:27.990431405 +0000 UTC m=+946.306683578" watchObservedRunningTime="2025-11-24 21:35:29.975230355 +0000 UTC m=+948.291482608" Nov 24 21:35:29 crc kubenswrapper[4915]: I1124 21:35:29.978908 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-srvwk" podStartSLOduration=2.292523985 podStartE2EDuration="7.978884193s" podCreationTimestamp="2025-11-24 21:35:22 +0000 UTC" firstStartedPulling="2025-11-24 21:35:23.732931958 +0000 UTC m=+942.049184141" lastFinishedPulling="2025-11-24 21:35:29.419292176 +0000 UTC m=+947.735544349" observedRunningTime="2025-11-24 21:35:29.971008451 +0000 UTC m=+948.287260624" watchObservedRunningTime="2025-11-24 21:35:29.978884193 +0000 UTC m=+948.295136476" Nov 24 21:35:33 crc kubenswrapper[4915]: I1124 21:35:33.164188 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-brgx6" Nov 24 21:35:33 crc kubenswrapper[4915]: I1124 21:35:33.526885 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:33 crc kubenswrapper[4915]: I1124 21:35:33.526979 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:33 crc kubenswrapper[4915]: I1124 21:35:33.532889 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:33 crc kubenswrapper[4915]: I1124 21:35:33.979466 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:35:34 crc kubenswrapper[4915]: I1124 21:35:34.038207 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6679746c6d-lph77"] Nov 24 21:35:43 crc kubenswrapper[4915]: I1124 21:35:43.137538 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-tgnfp" Nov 24 21:35:54 crc kubenswrapper[4915]: I1124 21:35:54.328268 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:35:54 crc kubenswrapper[4915]: I1124 21:35:54.329152 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.098487 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6679746c6d-lph77" podUID="c539181b-6e2f-4f16-98f1-0adf73a2c1a3" containerName="console" 
containerID="cri-o://778146fae420e3911807b26e94cf72ab3512fef945bf53af2338c96adf3ea131" gracePeriod=15 Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.718795 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6679746c6d-lph77_c539181b-6e2f-4f16-98f1-0adf73a2c1a3/console/0.log" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.719066 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.817873 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-trusted-ca-bundle\") pod \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.817966 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-oauth-serving-cert\") pod \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.818047 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-serving-cert\") pod \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.818072 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-oauth-config\") pod \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " Nov 24 21:35:59 crc 
kubenswrapper[4915]: I1124 21:35:59.818634 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c539181b-6e2f-4f16-98f1-0adf73a2c1a3" (UID: "c539181b-6e2f-4f16-98f1-0adf73a2c1a3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.818714 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-service-ca\") pod \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.818732 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c539181b-6e2f-4f16-98f1-0adf73a2c1a3" (UID: "c539181b-6e2f-4f16-98f1-0adf73a2c1a3"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.818750 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8nqt\" (UniqueName: \"kubernetes.io/projected/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-kube-api-access-w8nqt\") pod \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.818856 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-config\") pod \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\" (UID: \"c539181b-6e2f-4f16-98f1-0adf73a2c1a3\") " Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.819204 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-config" (OuterVolumeSpecName: "console-config") pod "c539181b-6e2f-4f16-98f1-0adf73a2c1a3" (UID: "c539181b-6e2f-4f16-98f1-0adf73a2c1a3"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.819232 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-service-ca" (OuterVolumeSpecName: "service-ca") pod "c539181b-6e2f-4f16-98f1-0adf73a2c1a3" (UID: "c539181b-6e2f-4f16-98f1-0adf73a2c1a3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.819398 4915 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.819418 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.819426 4915 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.819437 4915 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.823765 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c539181b-6e2f-4f16-98f1-0adf73a2c1a3" (UID: "c539181b-6e2f-4f16-98f1-0adf73a2c1a3"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.824395 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c539181b-6e2f-4f16-98f1-0adf73a2c1a3" (UID: "c539181b-6e2f-4f16-98f1-0adf73a2c1a3"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.826622 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-kube-api-access-w8nqt" (OuterVolumeSpecName: "kube-api-access-w8nqt") pod "c539181b-6e2f-4f16-98f1-0adf73a2c1a3" (UID: "c539181b-6e2f-4f16-98f1-0adf73a2c1a3"). InnerVolumeSpecName "kube-api-access-w8nqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.920889 4915 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.920916 4915 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:35:59 crc kubenswrapper[4915]: I1124 21:35:59.920925 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8nqt\" (UniqueName: \"kubernetes.io/projected/c539181b-6e2f-4f16-98f1-0adf73a2c1a3-kube-api-access-w8nqt\") on node \"crc\" DevicePath \"\"" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.153907 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz"] Nov 24 21:36:00 crc kubenswrapper[4915]: E1124 21:36:00.154338 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c539181b-6e2f-4f16-98f1-0adf73a2c1a3" containerName="console" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.154356 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c539181b-6e2f-4f16-98f1-0adf73a2c1a3" containerName="console" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 
21:36:00.154558 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c539181b-6e2f-4f16-98f1-0adf73a2c1a3" containerName="console" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.157522 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.161668 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.172442 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz"] Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.211650 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6679746c6d-lph77_c539181b-6e2f-4f16-98f1-0adf73a2c1a3/console/0.log" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.211961 4915 generic.go:334] "Generic (PLEG): container finished" podID="c539181b-6e2f-4f16-98f1-0adf73a2c1a3" containerID="778146fae420e3911807b26e94cf72ab3512fef945bf53af2338c96adf3ea131" exitCode=2 Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.212005 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6679746c6d-lph77" event={"ID":"c539181b-6e2f-4f16-98f1-0adf73a2c1a3","Type":"ContainerDied","Data":"778146fae420e3911807b26e94cf72ab3512fef945bf53af2338c96adf3ea131"} Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.212017 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6679746c6d-lph77" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.212044 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6679746c6d-lph77" event={"ID":"c539181b-6e2f-4f16-98f1-0adf73a2c1a3","Type":"ContainerDied","Data":"b3ec44b28cfda45aff4df8bdf983267e6b386842c328bbab538bb25dfa563417"} Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.212063 4915 scope.go:117] "RemoveContainer" containerID="778146fae420e3911807b26e94cf72ab3512fef945bf53af2338c96adf3ea131" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.226878 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kl54\" (UniqueName: \"kubernetes.io/projected/67e05b3d-d45a-42b8-ad97-3e03cabf726a-kube-api-access-7kl54\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz\" (UID: \"67e05b3d-d45a-42b8-ad97-3e03cabf726a\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.227000 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67e05b3d-d45a-42b8-ad97-3e03cabf726a-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz\" (UID: \"67e05b3d-d45a-42b8-ad97-3e03cabf726a\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.227058 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67e05b3d-d45a-42b8-ad97-3e03cabf726a-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz\" (UID: \"67e05b3d-d45a-42b8-ad97-3e03cabf726a\") " 
pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.229929 4915 scope.go:117] "RemoveContainer" containerID="778146fae420e3911807b26e94cf72ab3512fef945bf53af2338c96adf3ea131" Nov 24 21:36:00 crc kubenswrapper[4915]: E1124 21:36:00.230302 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"778146fae420e3911807b26e94cf72ab3512fef945bf53af2338c96adf3ea131\": container with ID starting with 778146fae420e3911807b26e94cf72ab3512fef945bf53af2338c96adf3ea131 not found: ID does not exist" containerID="778146fae420e3911807b26e94cf72ab3512fef945bf53af2338c96adf3ea131" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.230420 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"778146fae420e3911807b26e94cf72ab3512fef945bf53af2338c96adf3ea131"} err="failed to get container status \"778146fae420e3911807b26e94cf72ab3512fef945bf53af2338c96adf3ea131\": rpc error: code = NotFound desc = could not find container \"778146fae420e3911807b26e94cf72ab3512fef945bf53af2338c96adf3ea131\": container with ID starting with 778146fae420e3911807b26e94cf72ab3512fef945bf53af2338c96adf3ea131 not found: ID does not exist" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.243883 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6679746c6d-lph77"] Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.250343 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6679746c6d-lph77"] Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.328789 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67e05b3d-d45a-42b8-ad97-3e03cabf726a-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz\" (UID: 
\"67e05b3d-d45a-42b8-ad97-3e03cabf726a\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.328897 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67e05b3d-d45a-42b8-ad97-3e03cabf726a-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz\" (UID: \"67e05b3d-d45a-42b8-ad97-3e03cabf726a\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.328969 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kl54\" (UniqueName: \"kubernetes.io/projected/67e05b3d-d45a-42b8-ad97-3e03cabf726a-kube-api-access-7kl54\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz\" (UID: \"67e05b3d-d45a-42b8-ad97-3e03cabf726a\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.329400 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67e05b3d-d45a-42b8-ad97-3e03cabf726a-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz\" (UID: \"67e05b3d-d45a-42b8-ad97-3e03cabf726a\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.329409 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67e05b3d-d45a-42b8-ad97-3e03cabf726a-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz\" (UID: \"67e05b3d-d45a-42b8-ad97-3e03cabf726a\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" Nov 24 21:36:00 crc 
kubenswrapper[4915]: I1124 21:36:00.348602 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kl54\" (UniqueName: \"kubernetes.io/projected/67e05b3d-d45a-42b8-ad97-3e03cabf726a-kube-api-access-7kl54\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz\" (UID: \"67e05b3d-d45a-42b8-ad97-3e03cabf726a\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.435008 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c539181b-6e2f-4f16-98f1-0adf73a2c1a3" path="/var/lib/kubelet/pods/c539181b-6e2f-4f16-98f1-0adf73a2c1a3/volumes" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.491895 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" Nov 24 21:36:00 crc kubenswrapper[4915]: I1124 21:36:00.938901 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz"] Nov 24 21:36:01 crc kubenswrapper[4915]: I1124 21:36:01.224804 4915 generic.go:334] "Generic (PLEG): container finished" podID="67e05b3d-d45a-42b8-ad97-3e03cabf726a" containerID="58e55b7a6d203db216e5dcbcc9670489bec74c396be84032a8b7ef0188acd348" exitCode=0 Nov 24 21:36:01 crc kubenswrapper[4915]: I1124 21:36:01.224861 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" event={"ID":"67e05b3d-d45a-42b8-ad97-3e03cabf726a","Type":"ContainerDied","Data":"58e55b7a6d203db216e5dcbcc9670489bec74c396be84032a8b7ef0188acd348"} Nov 24 21:36:01 crc kubenswrapper[4915]: I1124 21:36:01.224919 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" 
event={"ID":"67e05b3d-d45a-42b8-ad97-3e03cabf726a","Type":"ContainerStarted","Data":"c62251ece34d72ad6fa8b2382752393c2fda8fb85c9a6542d4fc6252f8999fc6"} Nov 24 21:36:01 crc kubenswrapper[4915]: I1124 21:36:01.226167 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:36:03 crc kubenswrapper[4915]: I1124 21:36:03.251183 4915 generic.go:334] "Generic (PLEG): container finished" podID="67e05b3d-d45a-42b8-ad97-3e03cabf726a" containerID="6549e17ec63d7c31e3719c9d0ef5a7ffd65b975f1201b8603a2a90d0f4e4f813" exitCode=0 Nov 24 21:36:03 crc kubenswrapper[4915]: I1124 21:36:03.251290 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" event={"ID":"67e05b3d-d45a-42b8-ad97-3e03cabf726a","Type":"ContainerDied","Data":"6549e17ec63d7c31e3719c9d0ef5a7ffd65b975f1201b8603a2a90d0f4e4f813"} Nov 24 21:36:04 crc kubenswrapper[4915]: I1124 21:36:04.266558 4915 generic.go:334] "Generic (PLEG): container finished" podID="67e05b3d-d45a-42b8-ad97-3e03cabf726a" containerID="b7b8ebeb4988f15b9dedf112cc9cefb66fc2bdae73aece50d7c14ae733fbb2a5" exitCode=0 Nov 24 21:36:04 crc kubenswrapper[4915]: I1124 21:36:04.266897 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" event={"ID":"67e05b3d-d45a-42b8-ad97-3e03cabf726a","Type":"ContainerDied","Data":"b7b8ebeb4988f15b9dedf112cc9cefb66fc2bdae73aece50d7c14ae733fbb2a5"} Nov 24 21:36:05 crc kubenswrapper[4915]: I1124 21:36:05.563958 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" Nov 24 21:36:05 crc kubenswrapper[4915]: I1124 21:36:05.618039 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kl54\" (UniqueName: \"kubernetes.io/projected/67e05b3d-d45a-42b8-ad97-3e03cabf726a-kube-api-access-7kl54\") pod \"67e05b3d-d45a-42b8-ad97-3e03cabf726a\" (UID: \"67e05b3d-d45a-42b8-ad97-3e03cabf726a\") " Nov 24 21:36:05 crc kubenswrapper[4915]: I1124 21:36:05.618073 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67e05b3d-d45a-42b8-ad97-3e03cabf726a-bundle\") pod \"67e05b3d-d45a-42b8-ad97-3e03cabf726a\" (UID: \"67e05b3d-d45a-42b8-ad97-3e03cabf726a\") " Nov 24 21:36:05 crc kubenswrapper[4915]: I1124 21:36:05.618117 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67e05b3d-d45a-42b8-ad97-3e03cabf726a-util\") pod \"67e05b3d-d45a-42b8-ad97-3e03cabf726a\" (UID: \"67e05b3d-d45a-42b8-ad97-3e03cabf726a\") " Nov 24 21:36:05 crc kubenswrapper[4915]: I1124 21:36:05.619383 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e05b3d-d45a-42b8-ad97-3e03cabf726a-bundle" (OuterVolumeSpecName: "bundle") pod "67e05b3d-d45a-42b8-ad97-3e03cabf726a" (UID: "67e05b3d-d45a-42b8-ad97-3e03cabf726a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:36:05 crc kubenswrapper[4915]: I1124 21:36:05.627106 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e05b3d-d45a-42b8-ad97-3e03cabf726a-kube-api-access-7kl54" (OuterVolumeSpecName: "kube-api-access-7kl54") pod "67e05b3d-d45a-42b8-ad97-3e03cabf726a" (UID: "67e05b3d-d45a-42b8-ad97-3e03cabf726a"). InnerVolumeSpecName "kube-api-access-7kl54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:36:05 crc kubenswrapper[4915]: I1124 21:36:05.634489 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67e05b3d-d45a-42b8-ad97-3e03cabf726a-util" (OuterVolumeSpecName: "util") pod "67e05b3d-d45a-42b8-ad97-3e03cabf726a" (UID: "67e05b3d-d45a-42b8-ad97-3e03cabf726a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:36:05 crc kubenswrapper[4915]: I1124 21:36:05.720705 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kl54\" (UniqueName: \"kubernetes.io/projected/67e05b3d-d45a-42b8-ad97-3e03cabf726a-kube-api-access-7kl54\") on node \"crc\" DevicePath \"\"" Nov 24 21:36:05 crc kubenswrapper[4915]: I1124 21:36:05.720751 4915 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/67e05b3d-d45a-42b8-ad97-3e03cabf726a-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:36:05 crc kubenswrapper[4915]: I1124 21:36:05.720788 4915 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/67e05b3d-d45a-42b8-ad97-3e03cabf726a-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:36:06 crc kubenswrapper[4915]: I1124 21:36:06.288696 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" event={"ID":"67e05b3d-d45a-42b8-ad97-3e03cabf726a","Type":"ContainerDied","Data":"c62251ece34d72ad6fa8b2382752393c2fda8fb85c9a6542d4fc6252f8999fc6"} Nov 24 21:36:06 crc kubenswrapper[4915]: I1124 21:36:06.288820 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz" Nov 24 21:36:06 crc kubenswrapper[4915]: I1124 21:36:06.288826 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c62251ece34d72ad6fa8b2382752393c2fda8fb85c9a6542d4fc6252f8999fc6" Nov 24 21:36:14 crc kubenswrapper[4915]: I1124 21:36:14.991406 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt"] Nov 24 21:36:14 crc kubenswrapper[4915]: E1124 21:36:14.992087 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e05b3d-d45a-42b8-ad97-3e03cabf726a" containerName="extract" Nov 24 21:36:14 crc kubenswrapper[4915]: I1124 21:36:14.992100 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e05b3d-d45a-42b8-ad97-3e03cabf726a" containerName="extract" Nov 24 21:36:14 crc kubenswrapper[4915]: E1124 21:36:14.992121 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e05b3d-d45a-42b8-ad97-3e03cabf726a" containerName="util" Nov 24 21:36:14 crc kubenswrapper[4915]: I1124 21:36:14.992127 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e05b3d-d45a-42b8-ad97-3e03cabf726a" containerName="util" Nov 24 21:36:14 crc kubenswrapper[4915]: E1124 21:36:14.992134 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e05b3d-d45a-42b8-ad97-3e03cabf726a" containerName="pull" Nov 24 21:36:14 crc kubenswrapper[4915]: I1124 21:36:14.992140 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e05b3d-d45a-42b8-ad97-3e03cabf726a" containerName="pull" Nov 24 21:36:14 crc kubenswrapper[4915]: I1124 21:36:14.992264 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e05b3d-d45a-42b8-ad97-3e03cabf726a" containerName="extract" Nov 24 21:36:14 crc kubenswrapper[4915]: I1124 21:36:14.992784 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" Nov 24 21:36:14 crc kubenswrapper[4915]: I1124 21:36:14.995666 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-6q4sh" Nov 24 21:36:14 crc kubenswrapper[4915]: I1124 21:36:14.995745 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 24 21:36:14 crc kubenswrapper[4915]: I1124 21:36:14.995921 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.000292 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.005487 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.050232 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt"] Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.169477 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c96e3c48-3b17-4e76-ada9-4ab0d0890974-apiservice-cert\") pod \"metallb-operator-controller-manager-b6bfbb889-hdsxt\" (UID: \"c96e3c48-3b17-4e76-ada9-4ab0d0890974\") " pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.169937 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gczrf\" (UniqueName: \"kubernetes.io/projected/c96e3c48-3b17-4e76-ada9-4ab0d0890974-kube-api-access-gczrf\") pod 
\"metallb-operator-controller-manager-b6bfbb889-hdsxt\" (UID: \"c96e3c48-3b17-4e76-ada9-4ab0d0890974\") " pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.169983 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c96e3c48-3b17-4e76-ada9-4ab0d0890974-webhook-cert\") pod \"metallb-operator-controller-manager-b6bfbb889-hdsxt\" (UID: \"c96e3c48-3b17-4e76-ada9-4ab0d0890974\") " pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.246160 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7796db4489-smctd"] Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.247076 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.249508 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.249555 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.251609 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-twvx9" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.272046 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c96e3c48-3b17-4e76-ada9-4ab0d0890974-apiservice-cert\") pod \"metallb-operator-controller-manager-b6bfbb889-hdsxt\" (UID: \"c96e3c48-3b17-4e76-ada9-4ab0d0890974\") " 
pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.272186 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gczrf\" (UniqueName: \"kubernetes.io/projected/c96e3c48-3b17-4e76-ada9-4ab0d0890974-kube-api-access-gczrf\") pod \"metallb-operator-controller-manager-b6bfbb889-hdsxt\" (UID: \"c96e3c48-3b17-4e76-ada9-4ab0d0890974\") " pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.272236 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c96e3c48-3b17-4e76-ada9-4ab0d0890974-webhook-cert\") pod \"metallb-operator-controller-manager-b6bfbb889-hdsxt\" (UID: \"c96e3c48-3b17-4e76-ada9-4ab0d0890974\") " pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.272359 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7796db4489-smctd"] Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.294625 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c96e3c48-3b17-4e76-ada9-4ab0d0890974-webhook-cert\") pod \"metallb-operator-controller-manager-b6bfbb889-hdsxt\" (UID: \"c96e3c48-3b17-4e76-ada9-4ab0d0890974\") " pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.294630 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c96e3c48-3b17-4e76-ada9-4ab0d0890974-apiservice-cert\") pod \"metallb-operator-controller-manager-b6bfbb889-hdsxt\" (UID: \"c96e3c48-3b17-4e76-ada9-4ab0d0890974\") " 
pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.299594 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gczrf\" (UniqueName: \"kubernetes.io/projected/c96e3c48-3b17-4e76-ada9-4ab0d0890974-kube-api-access-gczrf\") pod \"metallb-operator-controller-manager-b6bfbb889-hdsxt\" (UID: \"c96e3c48-3b17-4e76-ada9-4ab0d0890974\") " pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.311918 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.373650 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/64d59574-0f7e-4d28-818c-0fac0b2603bd-webhook-cert\") pod \"metallb-operator-webhook-server-7796db4489-smctd\" (UID: \"64d59574-0f7e-4d28-818c-0fac0b2603bd\") " pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.373698 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjqvr\" (UniqueName: \"kubernetes.io/projected/64d59574-0f7e-4d28-818c-0fac0b2603bd-kube-api-access-wjqvr\") pod \"metallb-operator-webhook-server-7796db4489-smctd\" (UID: \"64d59574-0f7e-4d28-818c-0fac0b2603bd\") " pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.373906 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/64d59574-0f7e-4d28-818c-0fac0b2603bd-apiservice-cert\") pod \"metallb-operator-webhook-server-7796db4489-smctd\" (UID: 
\"64d59574-0f7e-4d28-818c-0fac0b2603bd\") " pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.480621 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/64d59574-0f7e-4d28-818c-0fac0b2603bd-webhook-cert\") pod \"metallb-operator-webhook-server-7796db4489-smctd\" (UID: \"64d59574-0f7e-4d28-818c-0fac0b2603bd\") " pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.481083 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjqvr\" (UniqueName: \"kubernetes.io/projected/64d59574-0f7e-4d28-818c-0fac0b2603bd-kube-api-access-wjqvr\") pod \"metallb-operator-webhook-server-7796db4489-smctd\" (UID: \"64d59574-0f7e-4d28-818c-0fac0b2603bd\") " pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.481127 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/64d59574-0f7e-4d28-818c-0fac0b2603bd-apiservice-cert\") pod \"metallb-operator-webhook-server-7796db4489-smctd\" (UID: \"64d59574-0f7e-4d28-818c-0fac0b2603bd\") " pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.499164 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/64d59574-0f7e-4d28-818c-0fac0b2603bd-webhook-cert\") pod \"metallb-operator-webhook-server-7796db4489-smctd\" (UID: \"64d59574-0f7e-4d28-818c-0fac0b2603bd\") " pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.499223 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/64d59574-0f7e-4d28-818c-0fac0b2603bd-apiservice-cert\") pod \"metallb-operator-webhook-server-7796db4489-smctd\" (UID: \"64d59574-0f7e-4d28-818c-0fac0b2603bd\") " pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.505631 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjqvr\" (UniqueName: \"kubernetes.io/projected/64d59574-0f7e-4d28-818c-0fac0b2603bd-kube-api-access-wjqvr\") pod \"metallb-operator-webhook-server-7796db4489-smctd\" (UID: \"64d59574-0f7e-4d28-818c-0fac0b2603bd\") " pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.563674 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" Nov 24 21:36:15 crc kubenswrapper[4915]: I1124 21:36:15.826023 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt"] Nov 24 21:36:15 crc kubenswrapper[4915]: W1124 21:36:15.834449 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc96e3c48_3b17_4e76_ada9_4ab0d0890974.slice/crio-fea4b90e6ab25163bac8735fc4b33a6e7fbd939289eecea5fc90b733bcb28f90 WatchSource:0}: Error finding container fea4b90e6ab25163bac8735fc4b33a6e7fbd939289eecea5fc90b733bcb28f90: Status 404 returned error can't find the container with id fea4b90e6ab25163bac8735fc4b33a6e7fbd939289eecea5fc90b733bcb28f90 Nov 24 21:36:16 crc kubenswrapper[4915]: I1124 21:36:16.013749 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7796db4489-smctd"] Nov 24 21:36:16 crc kubenswrapper[4915]: I1124 21:36:16.392182 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" event={"ID":"c96e3c48-3b17-4e76-ada9-4ab0d0890974","Type":"ContainerStarted","Data":"fea4b90e6ab25163bac8735fc4b33a6e7fbd939289eecea5fc90b733bcb28f90"} Nov 24 21:36:16 crc kubenswrapper[4915]: I1124 21:36:16.393588 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" event={"ID":"64d59574-0f7e-4d28-818c-0fac0b2603bd","Type":"ContainerStarted","Data":"8016a5487384fb3bc37d69c976a5d708d54d53aa6855dee52b7d737951d5e8a1"} Nov 24 21:36:19 crc kubenswrapper[4915]: I1124 21:36:19.423887 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" event={"ID":"c96e3c48-3b17-4e76-ada9-4ab0d0890974","Type":"ContainerStarted","Data":"11c5d3ae2dcd7424833dd8edebaf4240ccc71525731e1437238795001c9f3f86"} Nov 24 21:36:19 crc kubenswrapper[4915]: I1124 21:36:19.424460 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" Nov 24 21:36:21 crc kubenswrapper[4915]: I1124 21:36:21.449641 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" event={"ID":"64d59574-0f7e-4d28-818c-0fac0b2603bd","Type":"ContainerStarted","Data":"c37b585c07b36b83c0e3d4b24851ad364c64579af8208cdca72d87fdad97c07c"} Nov 24 21:36:21 crc kubenswrapper[4915]: I1124 21:36:21.450186 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" Nov 24 21:36:21 crc kubenswrapper[4915]: I1124 21:36:21.469553 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt" podStartSLOduration=4.137208537 podStartE2EDuration="7.469532286s" podCreationTimestamp="2025-11-24 21:36:14 +0000 UTC" 
firstStartedPulling="2025-11-24 21:36:15.836519266 +0000 UTC m=+994.152771439" lastFinishedPulling="2025-11-24 21:36:19.168843015 +0000 UTC m=+997.485095188" observedRunningTime="2025-11-24 21:36:19.462411204 +0000 UTC m=+997.778663407" watchObservedRunningTime="2025-11-24 21:36:21.469532286 +0000 UTC m=+999.785784469"
Nov 24 21:36:21 crc kubenswrapper[4915]: I1124 21:36:21.473026 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd" podStartSLOduration=1.450921021 podStartE2EDuration="6.47301606s" podCreationTimestamp="2025-11-24 21:36:15 +0000 UTC" firstStartedPulling="2025-11-24 21:36:16.021296065 +0000 UTC m=+994.337548238" lastFinishedPulling="2025-11-24 21:36:21.043391104 +0000 UTC m=+999.359643277" observedRunningTime="2025-11-24 21:36:21.467894652 +0000 UTC m=+999.784146845" watchObservedRunningTime="2025-11-24 21:36:21.47301606 +0000 UTC m=+999.789268233"
Nov 24 21:36:24 crc kubenswrapper[4915]: I1124 21:36:24.327794 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 21:36:24 crc kubenswrapper[4915]: I1124 21:36:24.328151 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:36:24 crc kubenswrapper[4915]: I1124 21:36:24.328216 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd"
Nov 24 21:36:24 crc kubenswrapper[4915]: I1124 21:36:24.329085 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b807af914a662edc8043def20fa4b712cbac16789b5da03da771b483217896d"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 21:36:24 crc kubenswrapper[4915]: I1124 21:36:24.329155 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://9b807af914a662edc8043def20fa4b712cbac16789b5da03da771b483217896d" gracePeriod=600
Nov 24 21:36:24 crc kubenswrapper[4915]: I1124 21:36:24.472644 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="9b807af914a662edc8043def20fa4b712cbac16789b5da03da771b483217896d" exitCode=0
Nov 24 21:36:24 crc kubenswrapper[4915]: I1124 21:36:24.472686 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"9b807af914a662edc8043def20fa4b712cbac16789b5da03da771b483217896d"}
Nov 24 21:36:24 crc kubenswrapper[4915]: I1124 21:36:24.472719 4915 scope.go:117] "RemoveContainer" containerID="615393d21d21ae1d445108dc4b58018415536dc2737ece31d186b4e6013b73e9"
Nov 24 21:36:25 crc kubenswrapper[4915]: I1124 21:36:25.480635 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"c001cd4c9ce6030e46567b52ccde6925b9b174f41dc20336633c7c1d5f367107"}
Nov 24 21:36:35 crc kubenswrapper[4915]: I1124 21:36:35.568402 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7796db4489-smctd"
Nov 24 21:36:55 crc kubenswrapper[4915]: I1124 21:36:55.315047 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-b6bfbb889-hdsxt"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.037382 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-wfnx5"]
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.040594 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.042289 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lgfkq"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.045453 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.045651 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.046518 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-97pzg"]
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.047561 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-97pzg"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.050302 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.061689 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-97pzg"]
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.148902 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-pw7wv"]
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.150441 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.153313 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.153650 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.155496 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.155656 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-zhlbc"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.169558 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-fg52n"]
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.171071 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-fg52n"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.175691 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.181967 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-fg52n"]
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.182840 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-frr-startup\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.182889 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb5bs\" (UniqueName: \"kubernetes.io/projected/aca8edea-7127-45be-be5f-1a7385ce37bf-kube-api-access-bb5bs\") pod \"frr-k8s-webhook-server-6998585d5-97pzg\" (UID: \"aca8edea-7127-45be-be5f-1a7385ce37bf\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-97pzg"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.182908 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-frr-conf\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.182924 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-metrics-certs\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.182958 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk75k\" (UniqueName: \"kubernetes.io/projected/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-kube-api-access-mk75k\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.182988 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-metrics\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.183009 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aca8edea-7127-45be-be5f-1a7385ce37bf-cert\") pod \"frr-k8s-webhook-server-6998585d5-97pzg\" (UID: \"aca8edea-7127-45be-be5f-1a7385ce37bf\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-97pzg"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.183029 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-frr-sockets\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.183044 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-reloader\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.284529 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1123b31-719b-4a3e-a6a9-31ea827aa3eb-cert\") pod \"controller-6c7b4b5f48-fg52n\" (UID: \"f1123b31-719b-4a3e-a6a9-31ea827aa3eb\") " pod="metallb-system/controller-6c7b4b5f48-fg52n"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.284581 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb5bs\" (UniqueName: \"kubernetes.io/projected/aca8edea-7127-45be-be5f-1a7385ce37bf-kube-api-access-bb5bs\") pod \"frr-k8s-webhook-server-6998585d5-97pzg\" (UID: \"aca8edea-7127-45be-be5f-1a7385ce37bf\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-97pzg"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.284620 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-frr-conf\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.284644 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e20a9fa7-0d01-46c0-9475-0895c44836f2-metallb-excludel2\") pod \"speaker-pw7wv\" (UID: \"e20a9fa7-0d01-46c0-9475-0895c44836f2\") " pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.284673 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-metrics-certs\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.284727 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk75k\" (UniqueName: \"kubernetes.io/projected/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-kube-api-access-mk75k\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.284789 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1123b31-719b-4a3e-a6a9-31ea827aa3eb-metrics-certs\") pod \"controller-6c7b4b5f48-fg52n\" (UID: \"f1123b31-719b-4a3e-a6a9-31ea827aa3eb\") " pod="metallb-system/controller-6c7b4b5f48-fg52n"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.284816 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-metrics\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.284842 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aca8edea-7127-45be-be5f-1a7385ce37bf-cert\") pod \"frr-k8s-webhook-server-6998585d5-97pzg\" (UID: \"aca8edea-7127-45be-be5f-1a7385ce37bf\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-97pzg"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.284875 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-frr-sockets\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.284898 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-reloader\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.284920 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmqgr\" (UniqueName: \"kubernetes.io/projected/e20a9fa7-0d01-46c0-9475-0895c44836f2-kube-api-access-xmqgr\") pod \"speaker-pw7wv\" (UID: \"e20a9fa7-0d01-46c0-9475-0895c44836f2\") " pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.284973 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e20a9fa7-0d01-46c0-9475-0895c44836f2-metrics-certs\") pod \"speaker-pw7wv\" (UID: \"e20a9fa7-0d01-46c0-9475-0895c44836f2\") " pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.285010 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e20a9fa7-0d01-46c0-9475-0895c44836f2-memberlist\") pod \"speaker-pw7wv\" (UID: \"e20a9fa7-0d01-46c0-9475-0895c44836f2\") " pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.285047 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-frr-startup\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.285087 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9td7f\" (UniqueName: \"kubernetes.io/projected/f1123b31-719b-4a3e-a6a9-31ea827aa3eb-kube-api-access-9td7f\") pod \"controller-6c7b4b5f48-fg52n\" (UID: \"f1123b31-719b-4a3e-a6a9-31ea827aa3eb\") " pod="metallb-system/controller-6c7b4b5f48-fg52n"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.285305 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-frr-conf\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.285498 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-metrics\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.286159 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-frr-sockets\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.286505 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-reloader\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.286910 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-frr-startup\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.294852 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/aca8edea-7127-45be-be5f-1a7385ce37bf-cert\") pod \"frr-k8s-webhook-server-6998585d5-97pzg\" (UID: \"aca8edea-7127-45be-be5f-1a7385ce37bf\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-97pzg"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.295345 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-metrics-certs\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.313556 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb5bs\" (UniqueName: \"kubernetes.io/projected/aca8edea-7127-45be-be5f-1a7385ce37bf-kube-api-access-bb5bs\") pod \"frr-k8s-webhook-server-6998585d5-97pzg\" (UID: \"aca8edea-7127-45be-be5f-1a7385ce37bf\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-97pzg"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.329848 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk75k\" (UniqueName: \"kubernetes.io/projected/1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679-kube-api-access-mk75k\") pod \"frr-k8s-wfnx5\" (UID: \"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679\") " pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.366325 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.383605 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-97pzg"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.387635 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmqgr\" (UniqueName: \"kubernetes.io/projected/e20a9fa7-0d01-46c0-9475-0895c44836f2-kube-api-access-xmqgr\") pod \"speaker-pw7wv\" (UID: \"e20a9fa7-0d01-46c0-9475-0895c44836f2\") " pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.387685 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e20a9fa7-0d01-46c0-9475-0895c44836f2-metrics-certs\") pod \"speaker-pw7wv\" (UID: \"e20a9fa7-0d01-46c0-9475-0895c44836f2\") " pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.387715 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e20a9fa7-0d01-46c0-9475-0895c44836f2-memberlist\") pod \"speaker-pw7wv\" (UID: \"e20a9fa7-0d01-46c0-9475-0895c44836f2\") " pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.387753 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9td7f\" (UniqueName: \"kubernetes.io/projected/f1123b31-719b-4a3e-a6a9-31ea827aa3eb-kube-api-access-9td7f\") pod \"controller-6c7b4b5f48-fg52n\" (UID: \"f1123b31-719b-4a3e-a6a9-31ea827aa3eb\") " pod="metallb-system/controller-6c7b4b5f48-fg52n"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.387794 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1123b31-719b-4a3e-a6a9-31ea827aa3eb-cert\") pod \"controller-6c7b4b5f48-fg52n\" (UID: \"f1123b31-719b-4a3e-a6a9-31ea827aa3eb\") " pod="metallb-system/controller-6c7b4b5f48-fg52n"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.387818 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e20a9fa7-0d01-46c0-9475-0895c44836f2-metallb-excludel2\") pod \"speaker-pw7wv\" (UID: \"e20a9fa7-0d01-46c0-9475-0895c44836f2\") " pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.387867 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1123b31-719b-4a3e-a6a9-31ea827aa3eb-metrics-certs\") pod \"controller-6c7b4b5f48-fg52n\" (UID: \"f1123b31-719b-4a3e-a6a9-31ea827aa3eb\") " pod="metallb-system/controller-6c7b4b5f48-fg52n"
Nov 24 21:36:56 crc kubenswrapper[4915]: E1124 21:36:56.388065 4915 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 24 21:36:56 crc kubenswrapper[4915]: E1124 21:36:56.388109 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20a9fa7-0d01-46c0-9475-0895c44836f2-memberlist podName:e20a9fa7-0d01-46c0-9475-0895c44836f2 nodeName:}" failed. No retries permitted until 2025-11-24 21:36:56.888095158 +0000 UTC m=+1035.204347331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e20a9fa7-0d01-46c0-9475-0895c44836f2-memberlist") pod "speaker-pw7wv" (UID: "e20a9fa7-0d01-46c0-9475-0895c44836f2") : secret "metallb-memberlist" not found
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.388843 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e20a9fa7-0d01-46c0-9475-0895c44836f2-metallb-excludel2\") pod \"speaker-pw7wv\" (UID: \"e20a9fa7-0d01-46c0-9475-0895c44836f2\") " pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.391376 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e20a9fa7-0d01-46c0-9475-0895c44836f2-metrics-certs\") pod \"speaker-pw7wv\" (UID: \"e20a9fa7-0d01-46c0-9475-0895c44836f2\") " pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.391617 4915 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.391766 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1123b31-719b-4a3e-a6a9-31ea827aa3eb-metrics-certs\") pod \"controller-6c7b4b5f48-fg52n\" (UID: \"f1123b31-719b-4a3e-a6a9-31ea827aa3eb\") " pod="metallb-system/controller-6c7b4b5f48-fg52n"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.402390 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f1123b31-719b-4a3e-a6a9-31ea827aa3eb-cert\") pod \"controller-6c7b4b5f48-fg52n\" (UID: \"f1123b31-719b-4a3e-a6a9-31ea827aa3eb\") " pod="metallb-system/controller-6c7b4b5f48-fg52n"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.410283 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmqgr\" (UniqueName: \"kubernetes.io/projected/e20a9fa7-0d01-46c0-9475-0895c44836f2-kube-api-access-xmqgr\") pod \"speaker-pw7wv\" (UID: \"e20a9fa7-0d01-46c0-9475-0895c44836f2\") " pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.410303 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9td7f\" (UniqueName: \"kubernetes.io/projected/f1123b31-719b-4a3e-a6a9-31ea827aa3eb-kube-api-access-9td7f\") pod \"controller-6c7b4b5f48-fg52n\" (UID: \"f1123b31-719b-4a3e-a6a9-31ea827aa3eb\") " pod="metallb-system/controller-6c7b4b5f48-fg52n"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.487798 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-fg52n"
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.741622 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wfnx5" event={"ID":"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679","Type":"ContainerStarted","Data":"79470128723ab354a3c64e6325e37e404bda4d287de28f3b9f83f4605499f9b0"}
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.811704 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-97pzg"]
Nov 24 21:36:56 crc kubenswrapper[4915]: W1124 21:36:56.811997 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaca8edea_7127_45be_be5f_1a7385ce37bf.slice/crio-b5f6290654fcf5d3237495e0959bb2f810bf247c1cfb8b12f88b17c5d17b9919 WatchSource:0}: Error finding container b5f6290654fcf5d3237495e0959bb2f810bf247c1cfb8b12f88b17c5d17b9919: Status 404 returned error can't find the container with id b5f6290654fcf5d3237495e0959bb2f810bf247c1cfb8b12f88b17c5d17b9919
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.895821 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e20a9fa7-0d01-46c0-9475-0895c44836f2-memberlist\") pod \"speaker-pw7wv\" (UID: \"e20a9fa7-0d01-46c0-9475-0895c44836f2\") " pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:56 crc kubenswrapper[4915]: E1124 21:36:56.896032 4915 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 24 21:36:56 crc kubenswrapper[4915]: E1124 21:36:56.896118 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e20a9fa7-0d01-46c0-9475-0895c44836f2-memberlist podName:e20a9fa7-0d01-46c0-9475-0895c44836f2 nodeName:}" failed. No retries permitted until 2025-11-24 21:36:57.896097136 +0000 UTC m=+1036.212349319 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e20a9fa7-0d01-46c0-9475-0895c44836f2-memberlist") pod "speaker-pw7wv" (UID: "e20a9fa7-0d01-46c0-9475-0895c44836f2") : secret "metallb-memberlist" not found
Nov 24 21:36:56 crc kubenswrapper[4915]: I1124 21:36:56.917688 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-fg52n"]
Nov 24 21:36:57 crc kubenswrapper[4915]: I1124 21:36:57.753416 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fg52n" event={"ID":"f1123b31-719b-4a3e-a6a9-31ea827aa3eb","Type":"ContainerStarted","Data":"2636229a3aeee5da30ea4d806d99152d26e2eb4a524fa2e0a320f45aa1dd3398"}
Nov 24 21:36:57 crc kubenswrapper[4915]: I1124 21:36:57.753475 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fg52n" event={"ID":"f1123b31-719b-4a3e-a6a9-31ea827aa3eb","Type":"ContainerStarted","Data":"757f54632d3ded5055f7c9945dd0d15f7db3775df6b5e2b5a335c439afcc229a"}
Nov 24 21:36:57 crc kubenswrapper[4915]: I1124 21:36:57.753497 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fg52n" event={"ID":"f1123b31-719b-4a3e-a6a9-31ea827aa3eb","Type":"ContainerStarted","Data":"8a1382456c18ede78d1958c0f7af7759c04249e97f8d8bbe7a85b8243c47f4a9"}
Nov 24 21:36:57 crc kubenswrapper[4915]: I1124 21:36:57.754892 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-fg52n"
Nov 24 21:36:57 crc kubenswrapper[4915]: I1124 21:36:57.756708 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-97pzg" event={"ID":"aca8edea-7127-45be-be5f-1a7385ce37bf","Type":"ContainerStarted","Data":"b5f6290654fcf5d3237495e0959bb2f810bf247c1cfb8b12f88b17c5d17b9919"}
Nov 24 21:36:57 crc kubenswrapper[4915]: I1124 21:36:57.776060 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-fg52n" podStartSLOduration=1.776038065 podStartE2EDuration="1.776038065s" podCreationTimestamp="2025-11-24 21:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:36:57.771060391 +0000 UTC m=+1036.087312574" watchObservedRunningTime="2025-11-24 21:36:57.776038065 +0000 UTC m=+1036.092290248"
Nov 24 21:36:57 crc kubenswrapper[4915]: I1124 21:36:57.913884 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e20a9fa7-0d01-46c0-9475-0895c44836f2-memberlist\") pod \"speaker-pw7wv\" (UID: \"e20a9fa7-0d01-46c0-9475-0895c44836f2\") " pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:57 crc kubenswrapper[4915]: I1124 21:36:57.932447 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e20a9fa7-0d01-46c0-9475-0895c44836f2-memberlist\") pod \"speaker-pw7wv\" (UID: \"e20a9fa7-0d01-46c0-9475-0895c44836f2\") " pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:57 crc kubenswrapper[4915]: I1124 21:36:57.968190 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:58 crc kubenswrapper[4915]: W1124 21:36:58.001177 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode20a9fa7_0d01_46c0_9475_0895c44836f2.slice/crio-7548f751101121a08c315ccff9c119686fae51b92d5494e54cc41e7df09b36ed WatchSource:0}: Error finding container 7548f751101121a08c315ccff9c119686fae51b92d5494e54cc41e7df09b36ed: Status 404 returned error can't find the container with id 7548f751101121a08c315ccff9c119686fae51b92d5494e54cc41e7df09b36ed
Nov 24 21:36:58 crc kubenswrapper[4915]: I1124 21:36:58.766428 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pw7wv" event={"ID":"e20a9fa7-0d01-46c0-9475-0895c44836f2","Type":"ContainerStarted","Data":"bc63873e661f8e1809984dfdce35e82b00712f8effda4d2470f5186fb68022b5"}
Nov 24 21:36:58 crc kubenswrapper[4915]: I1124 21:36:58.766483 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pw7wv" event={"ID":"e20a9fa7-0d01-46c0-9475-0895c44836f2","Type":"ContainerStarted","Data":"008c62c4c9270f8c67f0663e7dd38f8d646cb0da6ad38ecd4a857c528a78f0a7"}
Nov 24 21:36:58 crc kubenswrapper[4915]: I1124 21:36:58.766497 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-pw7wv" event={"ID":"e20a9fa7-0d01-46c0-9475-0895c44836f2","Type":"ContainerStarted","Data":"7548f751101121a08c315ccff9c119686fae51b92d5494e54cc41e7df09b36ed"}
Nov 24 21:36:58 crc kubenswrapper[4915]: I1124 21:36:58.766904 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-pw7wv"
Nov 24 21:36:58 crc kubenswrapper[4915]: I1124 21:36:58.796259 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-pw7wv" podStartSLOduration=2.796226824 podStartE2EDuration="2.796226824s" podCreationTimestamp="2025-11-24 21:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:36:58.793396288 +0000 UTC m=+1037.109648461" watchObservedRunningTime="2025-11-24 21:36:58.796226824 +0000 UTC m=+1037.112478997"
Nov 24 21:37:04 crc kubenswrapper[4915]: I1124 21:37:04.815015 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-97pzg" event={"ID":"aca8edea-7127-45be-be5f-1a7385ce37bf","Type":"ContainerStarted","Data":"7902b069d19e3f450e1b812132645c310231fada7229f83efb3c69ad8d1f96cb"}
Nov 24 21:37:04 crc kubenswrapper[4915]: I1124 21:37:04.815714 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-97pzg"
Nov 24 21:37:04 crc kubenswrapper[4915]: I1124 21:37:04.817167 4915 generic.go:334] "Generic (PLEG): container finished" podID="1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679" containerID="519ad0eb3aab606bd58174ff8b5722ba56ab980b7e9166be8b9d9982c8784a24" exitCode=0
Nov 24 21:37:04 crc kubenswrapper[4915]: I1124 21:37:04.817208 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wfnx5" event={"ID":"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679","Type":"ContainerDied","Data":"519ad0eb3aab606bd58174ff8b5722ba56ab980b7e9166be8b9d9982c8784a24"}
Nov 24 21:37:04 crc kubenswrapper[4915]: I1124 21:37:04.847151 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-97pzg" podStartSLOduration=1.442575236 podStartE2EDuration="8.847127657s" podCreationTimestamp="2025-11-24 21:36:56 +0000 UTC" firstStartedPulling="2025-11-24 21:36:56.814316802 +0000 UTC m=+1035.130568975" lastFinishedPulling="2025-11-24 21:37:04.218869223 +0000 UTC m=+1042.535121396" observedRunningTime="2025-11-24 21:37:04.837686152 +0000 UTC m=+1043.153938335" watchObservedRunningTime="2025-11-24 21:37:04.847127657 +0000 UTC m=+1043.163379870"
Nov 24 21:37:05 crc kubenswrapper[4915]: I1124 21:37:05.830079 4915 generic.go:334] "Generic (PLEG): container finished" podID="1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679" containerID="55c2f2fadf070e6a55a07e966441d18bc6f2d422605d32430e76216cfa97f7d1" exitCode=0
Nov 24 21:37:05 crc kubenswrapper[4915]: I1124 21:37:05.830959 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wfnx5" event={"ID":"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679","Type":"ContainerDied","Data":"55c2f2fadf070e6a55a07e966441d18bc6f2d422605d32430e76216cfa97f7d1"}
Nov 24 21:37:06 crc kubenswrapper[4915]: I1124 21:37:06.844428 4915 generic.go:334] "Generic (PLEG): container finished" podID="1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679" containerID="3eb127572d8026a355b723aeab7ab293ddda9bcf63c997423b58046073cfeb38" exitCode=0
Nov 24 21:37:06 crc kubenswrapper[4915]: I1124 21:37:06.844514 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wfnx5" event={"ID":"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679","Type":"ContainerDied","Data":"3eb127572d8026a355b723aeab7ab293ddda9bcf63c997423b58046073cfeb38"}
Nov 24 21:37:07 crc kubenswrapper[4915]: I1124 21:37:07.874709 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wfnx5" event={"ID":"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679","Type":"ContainerStarted","Data":"4404f99934e9633b71ae5aeaf36e4b6d965f168ab4aeae022bbea9eba0efced6"}
Nov 24 21:37:07 crc kubenswrapper[4915]: I1124 21:37:07.875068 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wfnx5" event={"ID":"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679","Type":"ContainerStarted","Data":"31fa4112a4acc90161d6d787d13b1610817afc0002232d0718078703bdbb3d92"}
Nov 24 21:37:07 crc kubenswrapper[4915]: I1124 21:37:07.875080 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wfnx5" event={"ID":"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679","Type":"ContainerStarted","Data":"7786875c7a56548c991be2110f8bf53ac85eab322228b430a1be3f0c671aa882"}
Nov 24 21:37:07 crc kubenswrapper[4915]: I1124 21:37:07.875091 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wfnx5" event={"ID":"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679","Type":"ContainerStarted","Data":"8889fee2b964afd227816c69ddb3cb32162ef8cd38af731bd439e13ed5c6ed74"}
Nov 24 21:37:07 crc kubenswrapper[4915]: I1124 21:37:07.875101 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wfnx5" event={"ID":"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679","Type":"ContainerStarted","Data":"f613f53fece31710eb156746f81188c67846557d21617ab0cd50dfffa7f6a444"}
Nov 24 21:37:08 crc kubenswrapper[4915]: I1124 21:37:08.885760 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-wfnx5" event={"ID":"1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679","Type":"ContainerStarted","Data":"2d2eed1e0cc676024c23837b623a760e8bfc6ccba86e15b231d5ec6c68b172c8"}
Nov 24 21:37:08 crc kubenswrapper[4915]: I1124 21:37:08.886108 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:37:08 crc kubenswrapper[4915]: I1124 21:37:08.926603 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-wfnx5" podStartSLOduration=5.296629157 podStartE2EDuration="12.926582766s" podCreationTimestamp="2025-11-24 21:36:56 +0000 UTC" firstStartedPulling="2025-11-24 21:36:56.551060419 +0000 UTC m=+1034.867312592" lastFinishedPulling="2025-11-24 21:37:04.181014018 +0000 UTC m=+1042.497266201" observedRunningTime="2025-11-24 21:37:08.920182483 +0000 UTC m=+1047.236434656" watchObservedRunningTime="2025-11-24 21:37:08.926582766 +0000 UTC m=+1047.242834939"
Nov 24 21:37:11 crc kubenswrapper[4915]: I1124 21:37:11.417717 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:37:11 crc kubenswrapper[4915]: I1124 21:37:11.463493 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-wfnx5"
Nov 24 21:37:16 crc kubenswrapper[4915]: I1124 21:37:16.389085 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-97pzg"
Nov 24 21:37:16 crc kubenswrapper[4915]: I1124 21:37:16.492116 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-fg52n"
Nov 24 21:37:17 crc kubenswrapper[4915]: I1124 21:37:17.974857 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-pw7wv"
Nov 24 21:37:20 crc kubenswrapper[4915]: I1124 21:37:20.980935 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2v4rw"]
Nov 24 21:37:20 crc kubenswrapper[4915]: I1124 21:37:20.982626 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2v4rw" Nov 24 21:37:20 crc kubenswrapper[4915]: I1124 21:37:20.987138 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-qnnsm" Nov 24 21:37:20 crc kubenswrapper[4915]: I1124 21:37:20.987210 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 24 21:37:20 crc kubenswrapper[4915]: I1124 21:37:20.987322 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 24 21:37:21 crc kubenswrapper[4915]: I1124 21:37:21.013632 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2v4rw"] Nov 24 21:37:21 crc kubenswrapper[4915]: I1124 21:37:21.036707 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk9q8\" (UniqueName: \"kubernetes.io/projected/98d58e94-a022-4e87-bd79-d6aaa13493a0-kube-api-access-nk9q8\") pod \"openstack-operator-index-2v4rw\" (UID: \"98d58e94-a022-4e87-bd79-d6aaa13493a0\") " pod="openstack-operators/openstack-operator-index-2v4rw" Nov 24 21:37:21 crc kubenswrapper[4915]: I1124 21:37:21.138305 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk9q8\" (UniqueName: \"kubernetes.io/projected/98d58e94-a022-4e87-bd79-d6aaa13493a0-kube-api-access-nk9q8\") pod \"openstack-operator-index-2v4rw\" (UID: \"98d58e94-a022-4e87-bd79-d6aaa13493a0\") " pod="openstack-operators/openstack-operator-index-2v4rw" Nov 24 21:37:21 crc kubenswrapper[4915]: I1124 21:37:21.163847 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk9q8\" (UniqueName: \"kubernetes.io/projected/98d58e94-a022-4e87-bd79-d6aaa13493a0-kube-api-access-nk9q8\") pod \"openstack-operator-index-2v4rw\" (UID: 
\"98d58e94-a022-4e87-bd79-d6aaa13493a0\") " pod="openstack-operators/openstack-operator-index-2v4rw" Nov 24 21:37:21 crc kubenswrapper[4915]: I1124 21:37:21.308805 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2v4rw" Nov 24 21:37:21 crc kubenswrapper[4915]: I1124 21:37:21.768147 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2v4rw"] Nov 24 21:37:21 crc kubenswrapper[4915]: W1124 21:37:21.780998 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98d58e94_a022_4e87_bd79_d6aaa13493a0.slice/crio-fc9026a57ffe93f7b0c089196d6247114573e431f5ad22dcd16fa98308905dd0 WatchSource:0}: Error finding container fc9026a57ffe93f7b0c089196d6247114573e431f5ad22dcd16fa98308905dd0: Status 404 returned error can't find the container with id fc9026a57ffe93f7b0c089196d6247114573e431f5ad22dcd16fa98308905dd0 Nov 24 21:37:21 crc kubenswrapper[4915]: I1124 21:37:21.993918 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2v4rw" event={"ID":"98d58e94-a022-4e87-bd79-d6aaa13493a0","Type":"ContainerStarted","Data":"fc9026a57ffe93f7b0c089196d6247114573e431f5ad22dcd16fa98308905dd0"} Nov 24 21:37:24 crc kubenswrapper[4915]: I1124 21:37:24.358232 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2v4rw"] Nov 24 21:37:25 crc kubenswrapper[4915]: I1124 21:37:25.175139 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x5l2g"] Nov 24 21:37:25 crc kubenswrapper[4915]: I1124 21:37:25.177352 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x5l2g" Nov 24 21:37:25 crc kubenswrapper[4915]: I1124 21:37:25.180386 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x5l2g"] Nov 24 21:37:25 crc kubenswrapper[4915]: I1124 21:37:25.311750 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wxjw\" (UniqueName: \"kubernetes.io/projected/b37899b6-9f24-4165-b16c-ee2984d44800-kube-api-access-2wxjw\") pod \"openstack-operator-index-x5l2g\" (UID: \"b37899b6-9f24-4165-b16c-ee2984d44800\") " pod="openstack-operators/openstack-operator-index-x5l2g" Nov 24 21:37:25 crc kubenswrapper[4915]: I1124 21:37:25.413546 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wxjw\" (UniqueName: \"kubernetes.io/projected/b37899b6-9f24-4165-b16c-ee2984d44800-kube-api-access-2wxjw\") pod \"openstack-operator-index-x5l2g\" (UID: \"b37899b6-9f24-4165-b16c-ee2984d44800\") " pod="openstack-operators/openstack-operator-index-x5l2g" Nov 24 21:37:25 crc kubenswrapper[4915]: I1124 21:37:25.436566 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wxjw\" (UniqueName: \"kubernetes.io/projected/b37899b6-9f24-4165-b16c-ee2984d44800-kube-api-access-2wxjw\") pod \"openstack-operator-index-x5l2g\" (UID: \"b37899b6-9f24-4165-b16c-ee2984d44800\") " pod="openstack-operators/openstack-operator-index-x5l2g" Nov 24 21:37:25 crc kubenswrapper[4915]: I1124 21:37:25.537011 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x5l2g" Nov 24 21:37:26 crc kubenswrapper[4915]: I1124 21:37:26.171611 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x5l2g"] Nov 24 21:37:26 crc kubenswrapper[4915]: I1124 21:37:26.369432 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-wfnx5" Nov 24 21:37:27 crc kubenswrapper[4915]: I1124 21:37:27.061076 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x5l2g" event={"ID":"b37899b6-9f24-4165-b16c-ee2984d44800","Type":"ContainerStarted","Data":"96f3b42478bd54265cb2f7bea490f5b30318b6580eae900639aca81e594b83f4"} Nov 24 21:37:27 crc kubenswrapper[4915]: I1124 21:37:27.061520 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x5l2g" event={"ID":"b37899b6-9f24-4165-b16c-ee2984d44800","Type":"ContainerStarted","Data":"7eaedeb9c3916ab0884f91c5959b879a0eeb6e3db63921c65e6fce0a2c0e95bd"} Nov 24 21:37:27 crc kubenswrapper[4915]: I1124 21:37:27.066401 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2v4rw" event={"ID":"98d58e94-a022-4e87-bd79-d6aaa13493a0","Type":"ContainerStarted","Data":"f058385d2211c740d919e7e88e4c3a942991e63a2c046624e845acdaec674df4"} Nov 24 21:37:27 crc kubenswrapper[4915]: I1124 21:37:27.066534 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-2v4rw" podUID="98d58e94-a022-4e87-bd79-d6aaa13493a0" containerName="registry-server" containerID="cri-o://f058385d2211c740d919e7e88e4c3a942991e63a2c046624e845acdaec674df4" gracePeriod=2 Nov 24 21:37:27 crc kubenswrapper[4915]: I1124 21:37:27.089999 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x5l2g" 
podStartSLOduration=2.035995693 podStartE2EDuration="2.089963914s" podCreationTimestamp="2025-11-24 21:37:25 +0000 UTC" firstStartedPulling="2025-11-24 21:37:26.18257762 +0000 UTC m=+1064.498829793" lastFinishedPulling="2025-11-24 21:37:26.236545841 +0000 UTC m=+1064.552798014" observedRunningTime="2025-11-24 21:37:27.084339291 +0000 UTC m=+1065.400591504" watchObservedRunningTime="2025-11-24 21:37:27.089963914 +0000 UTC m=+1065.406216127" Nov 24 21:37:27 crc kubenswrapper[4915]: I1124 21:37:27.121662 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2v4rw" podStartSLOduration=3.114637674 podStartE2EDuration="7.121635371s" podCreationTimestamp="2025-11-24 21:37:20 +0000 UTC" firstStartedPulling="2025-11-24 21:37:21.784190124 +0000 UTC m=+1060.100442297" lastFinishedPulling="2025-11-24 21:37:25.791187821 +0000 UTC m=+1064.107439994" observedRunningTime="2025-11-24 21:37:27.119145935 +0000 UTC m=+1065.435398148" watchObservedRunningTime="2025-11-24 21:37:27.121635371 +0000 UTC m=+1065.437887564" Nov 24 21:37:27 crc kubenswrapper[4915]: I1124 21:37:27.598021 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2v4rw" Nov 24 21:37:27 crc kubenswrapper[4915]: I1124 21:37:27.770888 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk9q8\" (UniqueName: \"kubernetes.io/projected/98d58e94-a022-4e87-bd79-d6aaa13493a0-kube-api-access-nk9q8\") pod \"98d58e94-a022-4e87-bd79-d6aaa13493a0\" (UID: \"98d58e94-a022-4e87-bd79-d6aaa13493a0\") " Nov 24 21:37:27 crc kubenswrapper[4915]: I1124 21:37:27.777099 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98d58e94-a022-4e87-bd79-d6aaa13493a0-kube-api-access-nk9q8" (OuterVolumeSpecName: "kube-api-access-nk9q8") pod "98d58e94-a022-4e87-bd79-d6aaa13493a0" (UID: "98d58e94-a022-4e87-bd79-d6aaa13493a0"). InnerVolumeSpecName "kube-api-access-nk9q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:37:27 crc kubenswrapper[4915]: I1124 21:37:27.872997 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk9q8\" (UniqueName: \"kubernetes.io/projected/98d58e94-a022-4e87-bd79-d6aaa13493a0-kube-api-access-nk9q8\") on node \"crc\" DevicePath \"\"" Nov 24 21:37:28 crc kubenswrapper[4915]: I1124 21:37:28.076063 4915 generic.go:334] "Generic (PLEG): container finished" podID="98d58e94-a022-4e87-bd79-d6aaa13493a0" containerID="f058385d2211c740d919e7e88e4c3a942991e63a2c046624e845acdaec674df4" exitCode=0 Nov 24 21:37:28 crc kubenswrapper[4915]: I1124 21:37:28.076739 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-2v4rw" Nov 24 21:37:28 crc kubenswrapper[4915]: I1124 21:37:28.076924 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2v4rw" event={"ID":"98d58e94-a022-4e87-bd79-d6aaa13493a0","Type":"ContainerDied","Data":"f058385d2211c740d919e7e88e4c3a942991e63a2c046624e845acdaec674df4"} Nov 24 21:37:28 crc kubenswrapper[4915]: I1124 21:37:28.076997 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2v4rw" event={"ID":"98d58e94-a022-4e87-bd79-d6aaa13493a0","Type":"ContainerDied","Data":"fc9026a57ffe93f7b0c089196d6247114573e431f5ad22dcd16fa98308905dd0"} Nov 24 21:37:28 crc kubenswrapper[4915]: I1124 21:37:28.077028 4915 scope.go:117] "RemoveContainer" containerID="f058385d2211c740d919e7e88e4c3a942991e63a2c046624e845acdaec674df4" Nov 24 21:37:28 crc kubenswrapper[4915]: I1124 21:37:28.100007 4915 scope.go:117] "RemoveContainer" containerID="f058385d2211c740d919e7e88e4c3a942991e63a2c046624e845acdaec674df4" Nov 24 21:37:28 crc kubenswrapper[4915]: E1124 21:37:28.100384 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f058385d2211c740d919e7e88e4c3a942991e63a2c046624e845acdaec674df4\": container with ID starting with f058385d2211c740d919e7e88e4c3a942991e63a2c046624e845acdaec674df4 not found: ID does not exist" containerID="f058385d2211c740d919e7e88e4c3a942991e63a2c046624e845acdaec674df4" Nov 24 21:37:28 crc kubenswrapper[4915]: I1124 21:37:28.100424 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f058385d2211c740d919e7e88e4c3a942991e63a2c046624e845acdaec674df4"} err="failed to get container status \"f058385d2211c740d919e7e88e4c3a942991e63a2c046624e845acdaec674df4\": rpc error: code = NotFound desc = could not find container 
\"f058385d2211c740d919e7e88e4c3a942991e63a2c046624e845acdaec674df4\": container with ID starting with f058385d2211c740d919e7e88e4c3a942991e63a2c046624e845acdaec674df4 not found: ID does not exist" Nov 24 21:37:28 crc kubenswrapper[4915]: I1124 21:37:28.114822 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-2v4rw"] Nov 24 21:37:28 crc kubenswrapper[4915]: I1124 21:37:28.122936 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-2v4rw"] Nov 24 21:37:28 crc kubenswrapper[4915]: I1124 21:37:28.442633 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98d58e94-a022-4e87-bd79-d6aaa13493a0" path="/var/lib/kubelet/pods/98d58e94-a022-4e87-bd79-d6aaa13493a0/volumes" Nov 24 21:37:35 crc kubenswrapper[4915]: I1124 21:37:35.537614 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-x5l2g" Nov 24 21:37:35 crc kubenswrapper[4915]: I1124 21:37:35.537999 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-x5l2g" Nov 24 21:37:35 crc kubenswrapper[4915]: I1124 21:37:35.593207 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-x5l2g" Nov 24 21:37:36 crc kubenswrapper[4915]: I1124 21:37:36.175029 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-x5l2g" Nov 24 21:37:54 crc kubenswrapper[4915]: I1124 21:37:54.846705 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f"] Nov 24 21:37:54 crc kubenswrapper[4915]: E1124 21:37:54.848683 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98d58e94-a022-4e87-bd79-d6aaa13493a0" containerName="registry-server" Nov 24 21:37:54 
crc kubenswrapper[4915]: I1124 21:37:54.848714 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="98d58e94-a022-4e87-bd79-d6aaa13493a0" containerName="registry-server" Nov 24 21:37:54 crc kubenswrapper[4915]: I1124 21:37:54.849084 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="98d58e94-a022-4e87-bd79-d6aaa13493a0" containerName="registry-server" Nov 24 21:37:54 crc kubenswrapper[4915]: I1124 21:37:54.852105 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" Nov 24 21:37:54 crc kubenswrapper[4915]: I1124 21:37:54.854286 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-lcrx9" Nov 24 21:37:54 crc kubenswrapper[4915]: I1124 21:37:54.857335 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f"] Nov 24 21:37:54 crc kubenswrapper[4915]: I1124 21:37:54.889417 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z56dx\" (UniqueName: \"kubernetes.io/projected/b2a9acff-41f6-47c0-9d84-8cb23ea017df-kube-api-access-z56dx\") pod \"d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f\" (UID: \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\") " pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" Nov 24 21:37:54 crc kubenswrapper[4915]: I1124 21:37:54.889536 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2a9acff-41f6-47c0-9d84-8cb23ea017df-util\") pod \"d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f\" (UID: \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\") " pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" Nov 24 21:37:54 crc 
kubenswrapper[4915]: I1124 21:37:54.889608 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2a9acff-41f6-47c0-9d84-8cb23ea017df-bundle\") pod \"d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f\" (UID: \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\") " pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" Nov 24 21:37:54 crc kubenswrapper[4915]: I1124 21:37:54.990476 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2a9acff-41f6-47c0-9d84-8cb23ea017df-bundle\") pod \"d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f\" (UID: \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\") " pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" Nov 24 21:37:54 crc kubenswrapper[4915]: I1124 21:37:54.990589 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z56dx\" (UniqueName: \"kubernetes.io/projected/b2a9acff-41f6-47c0-9d84-8cb23ea017df-kube-api-access-z56dx\") pod \"d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f\" (UID: \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\") " pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" Nov 24 21:37:54 crc kubenswrapper[4915]: I1124 21:37:54.990691 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2a9acff-41f6-47c0-9d84-8cb23ea017df-util\") pod \"d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f\" (UID: \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\") " pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" Nov 24 21:37:54 crc kubenswrapper[4915]: I1124 21:37:54.991375 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b2a9acff-41f6-47c0-9d84-8cb23ea017df-util\") pod \"d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f\" (UID: \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\") " pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" Nov 24 21:37:54 crc kubenswrapper[4915]: I1124 21:37:54.991641 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2a9acff-41f6-47c0-9d84-8cb23ea017df-bundle\") pod \"d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f\" (UID: \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\") " pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" Nov 24 21:37:55 crc kubenswrapper[4915]: I1124 21:37:55.010719 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z56dx\" (UniqueName: \"kubernetes.io/projected/b2a9acff-41f6-47c0-9d84-8cb23ea017df-kube-api-access-z56dx\") pod \"d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f\" (UID: \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\") " pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" Nov 24 21:37:55 crc kubenswrapper[4915]: I1124 21:37:55.181153 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" Nov 24 21:37:55 crc kubenswrapper[4915]: I1124 21:37:55.677879 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f"] Nov 24 21:37:56 crc kubenswrapper[4915]: I1124 21:37:56.351186 4915 generic.go:334] "Generic (PLEG): container finished" podID="b2a9acff-41f6-47c0-9d84-8cb23ea017df" containerID="075d1051ad6d328c2406997a2aa300bf0a86bcb56833bec8070af6a084b37cc9" exitCode=0 Nov 24 21:37:56 crc kubenswrapper[4915]: I1124 21:37:56.351258 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" event={"ID":"b2a9acff-41f6-47c0-9d84-8cb23ea017df","Type":"ContainerDied","Data":"075d1051ad6d328c2406997a2aa300bf0a86bcb56833bec8070af6a084b37cc9"} Nov 24 21:37:56 crc kubenswrapper[4915]: I1124 21:37:56.351732 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" event={"ID":"b2a9acff-41f6-47c0-9d84-8cb23ea017df","Type":"ContainerStarted","Data":"98c279f7aa28f6863a8608f3557265d4ba81d77b76819665bd492e3b511ca56c"} Nov 24 21:37:57 crc kubenswrapper[4915]: I1124 21:37:57.362977 4915 generic.go:334] "Generic (PLEG): container finished" podID="b2a9acff-41f6-47c0-9d84-8cb23ea017df" containerID="69fcf401167a0bdd66bcac99a61bf5504e5e20ce75cc526281b024b7f61525ab" exitCode=0 Nov 24 21:37:57 crc kubenswrapper[4915]: I1124 21:37:57.363409 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" event={"ID":"b2a9acff-41f6-47c0-9d84-8cb23ea017df","Type":"ContainerDied","Data":"69fcf401167a0bdd66bcac99a61bf5504e5e20ce75cc526281b024b7f61525ab"} Nov 24 21:37:58 crc kubenswrapper[4915]: I1124 21:37:58.375623 4915 generic.go:334] 
"Generic (PLEG): container finished" podID="b2a9acff-41f6-47c0-9d84-8cb23ea017df" containerID="ee2c624024a064cc8317a5f2b1dfe292dff259f57e50b0b29b934d752d073fea" exitCode=0 Nov 24 21:37:58 crc kubenswrapper[4915]: I1124 21:37:58.375721 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" event={"ID":"b2a9acff-41f6-47c0-9d84-8cb23ea017df","Type":"ContainerDied","Data":"ee2c624024a064cc8317a5f2b1dfe292dff259f57e50b0b29b934d752d073fea"} Nov 24 21:37:59 crc kubenswrapper[4915]: I1124 21:37:59.789031 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" Nov 24 21:37:59 crc kubenswrapper[4915]: I1124 21:37:59.882483 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2a9acff-41f6-47c0-9d84-8cb23ea017df-util\") pod \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\" (UID: \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\") " Nov 24 21:37:59 crc kubenswrapper[4915]: I1124 21:37:59.882535 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z56dx\" (UniqueName: \"kubernetes.io/projected/b2a9acff-41f6-47c0-9d84-8cb23ea017df-kube-api-access-z56dx\") pod \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\" (UID: \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\") " Nov 24 21:37:59 crc kubenswrapper[4915]: I1124 21:37:59.882607 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2a9acff-41f6-47c0-9d84-8cb23ea017df-bundle\") pod \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\" (UID: \"b2a9acff-41f6-47c0-9d84-8cb23ea017df\") " Nov 24 21:37:59 crc kubenswrapper[4915]: I1124 21:37:59.883474 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b2a9acff-41f6-47c0-9d84-8cb23ea017df-bundle" (OuterVolumeSpecName: "bundle") pod "b2a9acff-41f6-47c0-9d84-8cb23ea017df" (UID: "b2a9acff-41f6-47c0-9d84-8cb23ea017df"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:37:59 crc kubenswrapper[4915]: I1124 21:37:59.891031 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2a9acff-41f6-47c0-9d84-8cb23ea017df-kube-api-access-z56dx" (OuterVolumeSpecName: "kube-api-access-z56dx") pod "b2a9acff-41f6-47c0-9d84-8cb23ea017df" (UID: "b2a9acff-41f6-47c0-9d84-8cb23ea017df"). InnerVolumeSpecName "kube-api-access-z56dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:37:59 crc kubenswrapper[4915]: I1124 21:37:59.917820 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2a9acff-41f6-47c0-9d84-8cb23ea017df-util" (OuterVolumeSpecName: "util") pod "b2a9acff-41f6-47c0-9d84-8cb23ea017df" (UID: "b2a9acff-41f6-47c0-9d84-8cb23ea017df"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:37:59 crc kubenswrapper[4915]: I1124 21:37:59.986173 4915 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b2a9acff-41f6-47c0-9d84-8cb23ea017df-util\") on node \"crc\" DevicePath \"\"" Nov 24 21:37:59 crc kubenswrapper[4915]: I1124 21:37:59.986225 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z56dx\" (UniqueName: \"kubernetes.io/projected/b2a9acff-41f6-47c0-9d84-8cb23ea017df-kube-api-access-z56dx\") on node \"crc\" DevicePath \"\"" Nov 24 21:37:59 crc kubenswrapper[4915]: I1124 21:37:59.986249 4915 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b2a9acff-41f6-47c0-9d84-8cb23ea017df-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:38:00 crc kubenswrapper[4915]: I1124 21:38:00.401297 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" event={"ID":"b2a9acff-41f6-47c0-9d84-8cb23ea017df","Type":"ContainerDied","Data":"98c279f7aa28f6863a8608f3557265d4ba81d77b76819665bd492e3b511ca56c"} Nov 24 21:38:00 crc kubenswrapper[4915]: I1124 21:38:00.401654 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98c279f7aa28f6863a8608f3557265d4ba81d77b76819665bd492e3b511ca56c" Nov 24 21:38:00 crc kubenswrapper[4915]: I1124 21:38:00.401879 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f" Nov 24 21:38:08 crc kubenswrapper[4915]: I1124 21:38:08.698854 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-78bdd85758-sqsbl"] Nov 24 21:38:08 crc kubenswrapper[4915]: E1124 21:38:08.699715 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2a9acff-41f6-47c0-9d84-8cb23ea017df" containerName="extract" Nov 24 21:38:08 crc kubenswrapper[4915]: I1124 21:38:08.699731 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2a9acff-41f6-47c0-9d84-8cb23ea017df" containerName="extract" Nov 24 21:38:08 crc kubenswrapper[4915]: E1124 21:38:08.699750 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2a9acff-41f6-47c0-9d84-8cb23ea017df" containerName="pull" Nov 24 21:38:08 crc kubenswrapper[4915]: I1124 21:38:08.699757 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2a9acff-41f6-47c0-9d84-8cb23ea017df" containerName="pull" Nov 24 21:38:08 crc kubenswrapper[4915]: E1124 21:38:08.699767 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2a9acff-41f6-47c0-9d84-8cb23ea017df" containerName="util" Nov 24 21:38:08 crc kubenswrapper[4915]: I1124 21:38:08.699790 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2a9acff-41f6-47c0-9d84-8cb23ea017df" containerName="util" Nov 24 21:38:08 crc kubenswrapper[4915]: I1124 21:38:08.699952 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2a9acff-41f6-47c0-9d84-8cb23ea017df" containerName="extract" Nov 24 21:38:08 crc kubenswrapper[4915]: I1124 21:38:08.700629 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-78bdd85758-sqsbl" Nov 24 21:38:08 crc kubenswrapper[4915]: I1124 21:38:08.702097 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-ffxsd" Nov 24 21:38:08 crc kubenswrapper[4915]: I1124 21:38:08.720007 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-78bdd85758-sqsbl"] Nov 24 21:38:08 crc kubenswrapper[4915]: I1124 21:38:08.848316 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsmsj\" (UniqueName: \"kubernetes.io/projected/920274d2-7453-4b35-ac67-ada3c857cd58-kube-api-access-vsmsj\") pod \"openstack-operator-controller-operator-78bdd85758-sqsbl\" (UID: \"920274d2-7453-4b35-ac67-ada3c857cd58\") " pod="openstack-operators/openstack-operator-controller-operator-78bdd85758-sqsbl" Nov 24 21:38:08 crc kubenswrapper[4915]: I1124 21:38:08.949854 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsmsj\" (UniqueName: \"kubernetes.io/projected/920274d2-7453-4b35-ac67-ada3c857cd58-kube-api-access-vsmsj\") pod \"openstack-operator-controller-operator-78bdd85758-sqsbl\" (UID: \"920274d2-7453-4b35-ac67-ada3c857cd58\") " pod="openstack-operators/openstack-operator-controller-operator-78bdd85758-sqsbl" Nov 24 21:38:08 crc kubenswrapper[4915]: I1124 21:38:08.973424 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsmsj\" (UniqueName: \"kubernetes.io/projected/920274d2-7453-4b35-ac67-ada3c857cd58-kube-api-access-vsmsj\") pod \"openstack-operator-controller-operator-78bdd85758-sqsbl\" (UID: \"920274d2-7453-4b35-ac67-ada3c857cd58\") " pod="openstack-operators/openstack-operator-controller-operator-78bdd85758-sqsbl" Nov 24 21:38:09 crc kubenswrapper[4915]: I1124 21:38:09.089766 4915 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-78bdd85758-sqsbl" Nov 24 21:38:09 crc kubenswrapper[4915]: I1124 21:38:09.615228 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-78bdd85758-sqsbl"] Nov 24 21:38:10 crc kubenswrapper[4915]: I1124 21:38:10.493357 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-78bdd85758-sqsbl" event={"ID":"920274d2-7453-4b35-ac67-ada3c857cd58","Type":"ContainerStarted","Data":"a2a16391268952e9df985bd8701443f7e923a1166419b1b8d403ff860ff65684"} Nov 24 21:38:14 crc kubenswrapper[4915]: I1124 21:38:14.534529 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-78bdd85758-sqsbl" event={"ID":"920274d2-7453-4b35-ac67-ada3c857cd58","Type":"ContainerStarted","Data":"b568f2199289480af06367c9bd2cc0b95ddae5f96dc81db39852b37b9023b761"} Nov 24 21:38:14 crc kubenswrapper[4915]: I1124 21:38:14.535037 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-78bdd85758-sqsbl" Nov 24 21:38:14 crc kubenswrapper[4915]: I1124 21:38:14.568652 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-78bdd85758-sqsbl" podStartSLOduration=2.507091281 podStartE2EDuration="6.568634783s" podCreationTimestamp="2025-11-24 21:38:08 +0000 UTC" firstStartedPulling="2025-11-24 21:38:09.620305517 +0000 UTC m=+1107.936557680" lastFinishedPulling="2025-11-24 21:38:13.681849009 +0000 UTC m=+1111.998101182" observedRunningTime="2025-11-24 21:38:14.561589604 +0000 UTC m=+1112.877841777" watchObservedRunningTime="2025-11-24 21:38:14.568634783 +0000 UTC m=+1112.884886956" Nov 24 21:38:19 crc kubenswrapper[4915]: I1124 21:38:19.092658 
4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-78bdd85758-sqsbl" Nov 24 21:38:24 crc kubenswrapper[4915]: I1124 21:38:24.327506 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:38:24 crc kubenswrapper[4915]: I1124 21:38:24.327845 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.560467 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-rml9z"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.564184 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-rml9z" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.575382 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-cjf6b" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.578218 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-rml9z"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.584193 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-wp6b9"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.585635 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wp6b9" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.588756 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-6gvpm" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.591507 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-8hkmn"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.592876 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8hkmn" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.596326 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-6mq57" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.605995 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-wp6b9"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.612937 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-8hkmn"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.622208 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.641133 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.643835 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.654759 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.659688 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-vft2k"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.661093 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-vft2k" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.661466 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-l4rwx" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.661636 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-dwnv4" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.666877 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-qlcgg" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.679192 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.680506 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.683043 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9584m" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.683244 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.690612 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t427s\" (UniqueName: \"kubernetes.io/projected/6638fde3-c855-47c4-a339-a4b64a3b83ad-kube-api-access-t427s\") pod \"designate-operator-controller-manager-7d695c9b56-wp6b9\" (UID: \"6638fde3-c855-47c4-a339-a4b64a3b83ad\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wp6b9" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.690689 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wgpx\" (UniqueName: \"kubernetes.io/projected/87d99431-c528-49c5-b28b-e32a6f46baaf-kube-api-access-7wgpx\") pod \"cinder-operator-controller-manager-79856dc55c-8hkmn\" (UID: \"87d99431-c528-49c5-b28b-e32a6f46baaf\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8hkmn" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.690720 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdcxv\" (UniqueName: \"kubernetes.io/projected/5c67ba4c-d4f5-496d-bf28-6d681f42c840-kube-api-access-tdcxv\") pod \"barbican-operator-controller-manager-86dc4d89c8-rml9z\" (UID: \"5c67ba4c-d4f5-496d-bf28-6d681f42c840\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-rml9z" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 
21:38:40.690840 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmssw\" (UniqueName: \"kubernetes.io/projected/04e18c27-a9e4-439e-994b-2c38eb126153-kube-api-access-qmssw\") pod \"glance-operator-controller-manager-68b95954c9-rpbgd\" (UID: \"04e18c27-a9e4-439e-994b-2c38eb126153\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.696893 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.701665 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.713935 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-vft2k"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.762829 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m4z8z"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.764079 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m4z8z" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.773132 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-gn2mm" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.777940 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.791912 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmssw\" (UniqueName: \"kubernetes.io/projected/04e18c27-a9e4-439e-994b-2c38eb126153-kube-api-access-qmssw\") pod \"glance-operator-controller-manager-68b95954c9-rpbgd\" (UID: \"04e18c27-a9e4-439e-994b-2c38eb126153\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.791952 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb28t\" (UniqueName: \"kubernetes.io/projected/32115f30-05b5-4828-ba4b-155b238026a1-kube-api-access-cb28t\") pod \"horizon-operator-controller-manager-68c9694994-vft2k\" (UID: \"32115f30-05b5-4828-ba4b-155b238026a1\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-vft2k" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.791989 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t427s\" (UniqueName: \"kubernetes.io/projected/6638fde3-c855-47c4-a339-a4b64a3b83ad-kube-api-access-t427s\") pod \"designate-operator-controller-manager-7d695c9b56-wp6b9\" (UID: \"6638fde3-c855-47c4-a339-a4b64a3b83ad\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wp6b9" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.792030 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wgpx\" (UniqueName: \"kubernetes.io/projected/87d99431-c528-49c5-b28b-e32a6f46baaf-kube-api-access-7wgpx\") pod \"cinder-operator-controller-manager-79856dc55c-8hkmn\" (UID: \"87d99431-c528-49c5-b28b-e32a6f46baaf\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8hkmn" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.792048 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj4fz\" (UniqueName: \"kubernetes.io/projected/c8b0feeb-28f3-41a6-8ccc-a9eb042c4416-kube-api-access-vj4fz\") pod \"heat-operator-controller-manager-774b86978c-qdzj4\" (UID: \"c8b0feeb-28f3-41a6-8ccc-a9eb042c4416\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.792111 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdcxv\" (UniqueName: \"kubernetes.io/projected/5c67ba4c-d4f5-496d-bf28-6d681f42c840-kube-api-access-tdcxv\") pod \"barbican-operator-controller-manager-86dc4d89c8-rml9z\" (UID: \"5c67ba4c-d4f5-496d-bf28-6d681f42c840\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-rml9z" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.792127 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-kcs7l\" (UID: \"b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.792190 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jmv4\" (UniqueName: 
\"kubernetes.io/projected/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-kube-api-access-2jmv4\") pod \"infra-operator-controller-manager-d5cc86f4b-kcs7l\" (UID: \"b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.815023 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m4z8z"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.826569 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdcxv\" (UniqueName: \"kubernetes.io/projected/5c67ba4c-d4f5-496d-bf28-6d681f42c840-kube-api-access-tdcxv\") pod \"barbican-operator-controller-manager-86dc4d89c8-rml9z\" (UID: \"5c67ba4c-d4f5-496d-bf28-6d681f42c840\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-rml9z" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.830071 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-8bzrw"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.831198 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-8bzrw" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.835533 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wgpx\" (UniqueName: \"kubernetes.io/projected/87d99431-c528-49c5-b28b-e32a6f46baaf-kube-api-access-7wgpx\") pod \"cinder-operator-controller-manager-79856dc55c-8hkmn\" (UID: \"87d99431-c528-49c5-b28b-e32a6f46baaf\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8hkmn" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.837601 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-wnjnr" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.843139 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-8bzrw"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.858892 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t427s\" (UniqueName: \"kubernetes.io/projected/6638fde3-c855-47c4-a339-a4b64a3b83ad-kube-api-access-t427s\") pod \"designate-operator-controller-manager-7d695c9b56-wp6b9\" (UID: \"6638fde3-c855-47c4-a339-a4b64a3b83ad\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wp6b9" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.859593 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-86mf2"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.860983 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-86mf2" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.865142 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gdt7m" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.871087 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmssw\" (UniqueName: \"kubernetes.io/projected/04e18c27-a9e4-439e-994b-2c38eb126153-kube-api-access-qmssw\") pod \"glance-operator-controller-manager-68b95954c9-rpbgd\" (UID: \"04e18c27-a9e4-439e-994b-2c38eb126153\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.880472 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.881834 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.886680 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-86mf2"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.888542 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-rml9z" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.895978 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-5lx62" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.897245 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb28t\" (UniqueName: \"kubernetes.io/projected/32115f30-05b5-4828-ba4b-155b238026a1-kube-api-access-cb28t\") pod \"horizon-operator-controller-manager-68c9694994-vft2k\" (UID: \"32115f30-05b5-4828-ba4b-155b238026a1\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-vft2k" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.897310 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj4fz\" (UniqueName: \"kubernetes.io/projected/c8b0feeb-28f3-41a6-8ccc-a9eb042c4416-kube-api-access-vj4fz\") pod \"heat-operator-controller-manager-774b86978c-qdzj4\" (UID: \"c8b0feeb-28f3-41a6-8ccc-a9eb042c4416\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.897332 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-kcs7l\" (UID: \"b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.897352 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jmv4\" (UniqueName: \"kubernetes.io/projected/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-kube-api-access-2jmv4\") pod \"infra-operator-controller-manager-d5cc86f4b-kcs7l\" (UID: 
\"b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.897371 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z65hx\" (UniqueName: \"kubernetes.io/projected/6c2ce74c-40c4-4b98-ac53-1ce4869dfbe1-kube-api-access-z65hx\") pod \"ironic-operator-controller-manager-5bfcdc958c-m4z8z\" (UID: \"6c2ce74c-40c4-4b98-ac53-1ce4869dfbe1\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m4z8z" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.897404 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87qnc\" (UniqueName: \"kubernetes.io/projected/d090cd1f-8af4-468c-881d-d04cf192b0c4-kube-api-access-87qnc\") pod \"manila-operator-controller-manager-58bb8d67cc-8bzrw\" (UID: \"d090cd1f-8af4-468c-881d-d04cf192b0c4\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-8bzrw" Nov 24 21:38:40 crc kubenswrapper[4915]: E1124 21:38:40.897850 4915 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 24 21:38:40 crc kubenswrapper[4915]: E1124 21:38:40.897886 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert podName:b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b nodeName:}" failed. No retries permitted until 2025-11-24 21:38:41.397871536 +0000 UTC m=+1139.714123709 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert") pod "infra-operator-controller-manager-d5cc86f4b-kcs7l" (UID: "b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b") : secret "infra-operator-webhook-server-cert" not found Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.923032 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wp6b9" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.923571 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8hkmn" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.925823 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.926401 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj4fz\" (UniqueName: \"kubernetes.io/projected/c8b0feeb-28f3-41a6-8ccc-a9eb042c4416-kube-api-access-vj4fz\") pod \"heat-operator-controller-manager-774b86978c-qdzj4\" (UID: \"c8b0feeb-28f3-41a6-8ccc-a9eb042c4416\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.968769 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.970245 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jmv4\" (UniqueName: \"kubernetes.io/projected/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-kube-api-access-2jmv4\") pod \"infra-operator-controller-manager-d5cc86f4b-kcs7l\" (UID: \"b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.972384 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nsnq7" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.983747 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2"] Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.986289 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.988352 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gpz5d" Nov 24 21:38:40 crc kubenswrapper[4915]: I1124 21:38:40.988745 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.005698 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87qnc\" (UniqueName: \"kubernetes.io/projected/d090cd1f-8af4-468c-881d-d04cf192b0c4-kube-api-access-87qnc\") pod \"manila-operator-controller-manager-58bb8d67cc-8bzrw\" (UID: \"d090cd1f-8af4-468c-881d-d04cf192b0c4\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-8bzrw" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.006199 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn8zh\" (UniqueName: \"kubernetes.io/projected/2ce17a51-ca7e-4692-946d-1a09c9a865c5-kube-api-access-nn8zh\") pod \"keystone-operator-controller-manager-748dc6576f-86mf2\" (UID: \"2ce17a51-ca7e-4692-946d-1a09c9a865c5\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-86mf2" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.006555 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z65hx\" (UniqueName: \"kubernetes.io/projected/6c2ce74c-40c4-4b98-ac53-1ce4869dfbe1-kube-api-access-z65hx\") pod \"ironic-operator-controller-manager-5bfcdc958c-m4z8z\" (UID: \"6c2ce74c-40c4-4b98-ac53-1ce4869dfbe1\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m4z8z" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.006949 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbzgx\" (UniqueName: \"kubernetes.io/projected/567bba9d-1881-4a67-b6bb-678650252bcc-kube-api-access-lbzgx\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-xz4zz\" (UID: \"567bba9d-1881-4a67-b6bb-678650252bcc\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz" Nov 24 21:38:41 
crc kubenswrapper[4915]: I1124 21:38:41.007117 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw4nt\" (UniqueName: \"kubernetes.io/projected/3f175a4a-2119-4b83-84f6-d067eb8be406-kube-api-access-xw4nt\") pod \"neutron-operator-controller-manager-7c57c8bbc4-4mrzh\" (UID: \"3f175a4a-2119-4b83-84f6-d067eb8be406\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.017817 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb28t\" (UniqueName: \"kubernetes.io/projected/32115f30-05b5-4828-ba4b-155b238026a1-kube-api-access-cb28t\") pod \"horizon-operator-controller-manager-68c9694994-vft2k\" (UID: \"32115f30-05b5-4828-ba4b-155b238026a1\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-vft2k" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.023825 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.033101 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.033490 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87qnc\" (UniqueName: \"kubernetes.io/projected/d090cd1f-8af4-468c-881d-d04cf192b0c4-kube-api-access-87qnc\") pod \"manila-operator-controller-manager-58bb8d67cc-8bzrw\" (UID: \"d090cd1f-8af4-468c-881d-d04cf192b0c4\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-8bzrw" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.034445 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.050461 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z65hx\" (UniqueName: \"kubernetes.io/projected/6c2ce74c-40c4-4b98-ac53-1ce4869dfbe1-kube-api-access-z65hx\") pod \"ironic-operator-controller-manager-5bfcdc958c-m4z8z\" (UID: \"6c2ce74c-40c4-4b98-ac53-1ce4869dfbe1\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m4z8z" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.054573 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.066188 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.067829 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.068926 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-vft2k" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.069921 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-bfwh6" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.072362 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.094756 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.096227 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.100346 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cfrqm" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.100393 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.101866 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.103170 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.105192 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m4z8z" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.105317 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-hp79q" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.108950 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rl99\" (UniqueName: \"kubernetes.io/projected/cdd28f21-72b9-4818-88fd-68e6a8dbc508-kube-api-access-4rl99\") pod \"octavia-operator-controller-manager-fd75fd47d-whss9\" (UID: \"cdd28f21-72b9-4818-88fd-68e6a8dbc508\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.109036 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn8zh\" (UniqueName: \"kubernetes.io/projected/2ce17a51-ca7e-4692-946d-1a09c9a865c5-kube-api-access-nn8zh\") pod \"keystone-operator-controller-manager-748dc6576f-86mf2\" (UID: \"2ce17a51-ca7e-4692-946d-1a09c9a865c5\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-86mf2" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.109098 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsm89\" (UniqueName: \"kubernetes.io/projected/3bbdd81f-f69b-4e09-b1b2-374723b591ab-kube-api-access-gsm89\") pod \"nova-operator-controller-manager-79556f57fc-zk5k2\" (UID: \"3bbdd81f-f69b-4e09-b1b2-374723b591ab\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.109186 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbzgx\" (UniqueName: \"kubernetes.io/projected/567bba9d-1881-4a67-b6bb-678650252bcc-kube-api-access-lbzgx\") pod 
\"mariadb-operator-controller-manager-cb6c4fdb7-xz4zz\" (UID: \"567bba9d-1881-4a67-b6bb-678650252bcc\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.109223 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw4nt\" (UniqueName: \"kubernetes.io/projected/3f175a4a-2119-4b83-84f6-d067eb8be406-kube-api-access-xw4nt\") pod \"neutron-operator-controller-manager-7c57c8bbc4-4mrzh\" (UID: \"3f175a4a-2119-4b83-84f6-d067eb8be406\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.115363 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.117116 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.119722 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-gqddk" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.127448 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.132558 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbzgx\" (UniqueName: \"kubernetes.io/projected/567bba9d-1881-4a67-b6bb-678650252bcc-kube-api-access-lbzgx\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-xz4zz\" (UID: \"567bba9d-1881-4a67-b6bb-678650252bcc\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.134291 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw4nt\" (UniqueName: \"kubernetes.io/projected/3f175a4a-2119-4b83-84f6-d067eb8be406-kube-api-access-xw4nt\") pod \"neutron-operator-controller-manager-7c57c8bbc4-4mrzh\" (UID: \"3f175a4a-2119-4b83-84f6-d067eb8be406\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.134501 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn8zh\" (UniqueName: \"kubernetes.io/projected/2ce17a51-ca7e-4692-946d-1a09c9a865c5-kube-api-access-nn8zh\") pod \"keystone-operator-controller-manager-748dc6576f-86mf2\" (UID: \"2ce17a51-ca7e-4692-946d-1a09c9a865c5\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-86mf2" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.141473 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.190674 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6q9th"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.192341 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6q9th" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.199135 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4jf9t" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.205385 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.210562 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5mzb\" (UniqueName: \"kubernetes.io/projected/03b268f6-b5db-44d1-8fc9-6d8cedb8c9b6-kube-api-access-r5mzb\") pod \"ovn-operator-controller-manager-66cf5c67ff-zr674\" (UID: \"03b268f6-b5db-44d1-8fc9-6d8cedb8c9b6\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.210754 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d92r\" (UniqueName: \"kubernetes.io/projected/0a3128a6-4ca7-4cd5-800f-20860a97aed5-kube-api-access-4d92r\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7\" (UID: \"0a3128a6-4ca7-4cd5-800f-20860a97aed5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.210847 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rl99\" (UniqueName: \"kubernetes.io/projected/cdd28f21-72b9-4818-88fd-68e6a8dbc508-kube-api-access-4rl99\") pod \"octavia-operator-controller-manager-fd75fd47d-whss9\" (UID: \"cdd28f21-72b9-4818-88fd-68e6a8dbc508\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 
21:38:41.210979 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26g4b\" (UniqueName: \"kubernetes.io/projected/272c7c4e-9ecc-42cf-8b44-0a61f7823578-kube-api-access-26g4b\") pod \"placement-operator-controller-manager-5db546f9d9-cc9cv\" (UID: \"272c7c4e-9ecc-42cf-8b44-0a61f7823578\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.211061 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a3128a6-4ca7-4cd5-800f-20860a97aed5-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7\" (UID: \"0a3128a6-4ca7-4cd5-800f-20860a97aed5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.211180 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsm89\" (UniqueName: \"kubernetes.io/projected/3bbdd81f-f69b-4e09-b1b2-374723b591ab-kube-api-access-gsm89\") pod \"nova-operator-controller-manager-79556f57fc-zk5k2\" (UID: \"3bbdd81f-f69b-4e09-b1b2-374723b591ab\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.218998 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6q9th"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.224845 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.226413 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.228297 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rl99\" (UniqueName: \"kubernetes.io/projected/cdd28f21-72b9-4818-88fd-68e6a8dbc508-kube-api-access-4rl99\") pod \"octavia-operator-controller-manager-fd75fd47d-whss9\" (UID: \"cdd28f21-72b9-4818-88fd-68e6a8dbc508\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.230172 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-kb88p" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.242950 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.259128 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsm89\" (UniqueName: \"kubernetes.io/projected/3bbdd81f-f69b-4e09-b1b2-374723b591ab-kube-api-access-gsm89\") pod \"nova-operator-controller-manager-79556f57fc-zk5k2\" (UID: \"3bbdd81f-f69b-4e09-b1b2-374723b591ab\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.274991 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.276598 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.280243 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-j5mpk" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.309942 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.314827 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5mzb\" (UniqueName: \"kubernetes.io/projected/03b268f6-b5db-44d1-8fc9-6d8cedb8c9b6-kube-api-access-r5mzb\") pod \"ovn-operator-controller-manager-66cf5c67ff-zr674\" (UID: \"03b268f6-b5db-44d1-8fc9-6d8cedb8c9b6\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.314868 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prwdk\" (UniqueName: \"kubernetes.io/projected/d23a4b31-9618-4e98-82a6-8e32881bec59-kube-api-access-prwdk\") pod \"test-operator-controller-manager-5cb74df96-g9rrr\" (UID: \"d23a4b31-9618-4e98-82a6-8e32881bec59\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.314887 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d92r\" (UniqueName: \"kubernetes.io/projected/0a3128a6-4ca7-4cd5-800f-20860a97aed5-kube-api-access-4d92r\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7\" (UID: \"0a3128a6-4ca7-4cd5-800f-20860a97aed5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.314909 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqcbv\" (UniqueName: \"kubernetes.io/projected/caacbc5e-655b-4876-a8ec-94fc83478510-kube-api-access-dqcbv\") pod \"swift-operator-controller-manager-6fdc4fcf86-6q9th\" (UID: \"caacbc5e-655b-4876-a8ec-94fc83478510\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6q9th" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.314945 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26g4b\" (UniqueName: \"kubernetes.io/projected/272c7c4e-9ecc-42cf-8b44-0a61f7823578-kube-api-access-26g4b\") pod \"placement-operator-controller-manager-5db546f9d9-cc9cv\" (UID: \"272c7c4e-9ecc-42cf-8b44-0a61f7823578\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.314978 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a3128a6-4ca7-4cd5-800f-20860a97aed5-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7\" (UID: \"0a3128a6-4ca7-4cd5-800f-20860a97aed5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.315018 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-926wj\" (UniqueName: \"kubernetes.io/projected/3f8c6903-143e-450e-8ea1-92d6ac474b48-kube-api-access-926wj\") pod \"telemetry-operator-controller-manager-f55c5bd94-dck7p\" (UID: \"3f8c6903-143e-450e-8ea1-92d6ac474b48\") " pod="openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p" Nov 24 21:38:41 crc kubenswrapper[4915]: E1124 21:38:41.315592 4915 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Nov 24 21:38:41 crc kubenswrapper[4915]: E1124 21:38:41.315628 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a3128a6-4ca7-4cd5-800f-20860a97aed5-cert podName:0a3128a6-4ca7-4cd5-800f-20860a97aed5 nodeName:}" failed. No retries permitted until 2025-11-24 21:38:41.815615597 +0000 UTC m=+1140.131867770 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0a3128a6-4ca7-4cd5-800f-20860a97aed5-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" (UID: "0a3128a6-4ca7-4cd5-800f-20860a97aed5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.320386 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-8bzrw" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.342000 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26g4b\" (UniqueName: \"kubernetes.io/projected/272c7c4e-9ecc-42cf-8b44-0a61f7823578-kube-api-access-26g4b\") pod \"placement-operator-controller-manager-5db546f9d9-cc9cv\" (UID: \"272c7c4e-9ecc-42cf-8b44-0a61f7823578\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.349858 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d92r\" (UniqueName: \"kubernetes.io/projected/0a3128a6-4ca7-4cd5-800f-20860a97aed5-kube-api-access-4d92r\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7\" (UID: \"0a3128a6-4ca7-4cd5-800f-20860a97aed5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.358184 4915 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-86mf2" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.369109 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-ngwcx"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.372390 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5mzb\" (UniqueName: \"kubernetes.io/projected/03b268f6-b5db-44d1-8fc9-6d8cedb8c9b6-kube-api-access-r5mzb\") pod \"ovn-operator-controller-manager-66cf5c67ff-zr674\" (UID: \"03b268f6-b5db-44d1-8fc9-6d8cedb8c9b6\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.374611 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.374796 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-ngwcx" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.380608 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-99rcf" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.387845 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-ngwcx"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.391388 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.395953 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.414216 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.416200 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prwdk\" (UniqueName: \"kubernetes.io/projected/d23a4b31-9618-4e98-82a6-8e32881bec59-kube-api-access-prwdk\") pod \"test-operator-controller-manager-5cb74df96-g9rrr\" (UID: \"d23a4b31-9618-4e98-82a6-8e32881bec59\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.416227 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqcbv\" (UniqueName: \"kubernetes.io/projected/caacbc5e-655b-4876-a8ec-94fc83478510-kube-api-access-dqcbv\") pod \"swift-operator-controller-manager-6fdc4fcf86-6q9th\" (UID: \"caacbc5e-655b-4876-a8ec-94fc83478510\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6q9th" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.416333 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-926wj\" (UniqueName: \"kubernetes.io/projected/3f8c6903-143e-450e-8ea1-92d6ac474b48-kube-api-access-926wj\") pod \"telemetry-operator-controller-manager-f55c5bd94-dck7p\" (UID: \"3f8c6903-143e-450e-8ea1-92d6ac474b48\") " pod="openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.416355 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7994\" (UniqueName: \"kubernetes.io/projected/de2fcac4-4e4f-404d-a8e9-a4774d3f3936-kube-api-access-n7994\") pod 
\"watcher-operator-controller-manager-864885998-ngwcx\" (UID: \"de2fcac4-4e4f-404d-a8e9-a4774d3f3936\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-ngwcx" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.416437 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-kcs7l\" (UID: \"b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" Nov 24 21:38:41 crc kubenswrapper[4915]: E1124 21:38:41.416579 4915 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 24 21:38:41 crc kubenswrapper[4915]: E1124 21:38:41.416618 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert podName:b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b nodeName:}" failed. No retries permitted until 2025-11-24 21:38:42.416603947 +0000 UTC m=+1140.732856120 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert") pod "infra-operator-controller-manager-d5cc86f4b-kcs7l" (UID: "b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b") : secret "infra-operator-webhook-server-cert" not found Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.454077 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-926wj\" (UniqueName: \"kubernetes.io/projected/3f8c6903-143e-450e-8ea1-92d6ac474b48-kube-api-access-926wj\") pod \"telemetry-operator-controller-manager-f55c5bd94-dck7p\" (UID: \"3f8c6903-143e-450e-8ea1-92d6ac474b48\") " pod="openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.454648 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prwdk\" (UniqueName: \"kubernetes.io/projected/d23a4b31-9618-4e98-82a6-8e32881bec59-kube-api-access-prwdk\") pod \"test-operator-controller-manager-5cb74df96-g9rrr\" (UID: \"d23a4b31-9618-4e98-82a6-8e32881bec59\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.474606 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqcbv\" (UniqueName: \"kubernetes.io/projected/caacbc5e-655b-4876-a8ec-94fc83478510-kube-api-access-dqcbv\") pod \"swift-operator-controller-manager-6fdc4fcf86-6q9th\" (UID: \"caacbc5e-655b-4876-a8ec-94fc83478510\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6q9th" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.481987 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.499331 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.501481 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.501874 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.512015 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.512161 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mhlx7" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.519436 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7994\" (UniqueName: \"kubernetes.io/projected/de2fcac4-4e4f-404d-a8e9-a4774d3f3936-kube-api-access-n7994\") pod \"watcher-operator-controller-manager-864885998-ngwcx\" (UID: \"de2fcac4-4e4f-404d-a8e9-a4774d3f3936\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-ngwcx" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.560440 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7994\" (UniqueName: \"kubernetes.io/projected/de2fcac4-4e4f-404d-a8e9-a4774d3f3936-kube-api-access-n7994\") pod \"watcher-operator-controller-manager-864885998-ngwcx\" (UID: \"de2fcac4-4e4f-404d-a8e9-a4774d3f3936\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-ngwcx" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.560511 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxrdr"] Nov 24 21:38:41 crc 
kubenswrapper[4915]: I1124 21:38:41.561734 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxrdr" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.569449 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.571462 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-6lx46" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.597460 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.621261 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f254t\" (UniqueName: \"kubernetes.io/projected/49fac201-00d2-42d3-9e1f-ac2fde219037-kube-api-access-f254t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zxrdr\" (UID: \"49fac201-00d2-42d3-9e1f-ac2fde219037\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxrdr" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.621422 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-webhook-certs\") pod \"openstack-operator-controller-manager-b6b55f9c-sl7sk\" (UID: \"87db2cf4-862d-4c0e-9cc9-72548e8bb63b\") " pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.621455 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzcxr\" (UniqueName: 
\"kubernetes.io/projected/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-kube-api-access-mzcxr\") pod \"openstack-operator-controller-manager-b6b55f9c-sl7sk\" (UID: \"87db2cf4-862d-4c0e-9cc9-72548e8bb63b\") " pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.622028 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6q9th" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.622679 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-metrics-certs\") pod \"openstack-operator-controller-manager-b6b55f9c-sl7sk\" (UID: \"87db2cf4-862d-4c0e-9cc9-72548e8bb63b\") " pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.631829 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxrdr"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.650416 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.677217 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.745664 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-ngwcx" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.745827 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-webhook-certs\") pod \"openstack-operator-controller-manager-b6b55f9c-sl7sk\" (UID: \"87db2cf4-862d-4c0e-9cc9-72548e8bb63b\") " pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.746434 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzcxr\" (UniqueName: \"kubernetes.io/projected/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-kube-api-access-mzcxr\") pod \"openstack-operator-controller-manager-b6b55f9c-sl7sk\" (UID: \"87db2cf4-862d-4c0e-9cc9-72548e8bb63b\") " pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.746481 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-metrics-certs\") pod \"openstack-operator-controller-manager-b6b55f9c-sl7sk\" (UID: \"87db2cf4-862d-4c0e-9cc9-72548e8bb63b\") " pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.746577 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f254t\" (UniqueName: \"kubernetes.io/projected/49fac201-00d2-42d3-9e1f-ac2fde219037-kube-api-access-f254t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zxrdr\" (UID: \"49fac201-00d2-42d3-9e1f-ac2fde219037\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxrdr" Nov 24 21:38:41 crc kubenswrapper[4915]: E1124 21:38:41.745926 4915 secret.go:188] Couldn't 
get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 21:38:41 crc kubenswrapper[4915]: E1124 21:38:41.746984 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-webhook-certs podName:87db2cf4-862d-4c0e-9cc9-72548e8bb63b nodeName:}" failed. No retries permitted until 2025-11-24 21:38:42.246968865 +0000 UTC m=+1140.563221038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-webhook-certs") pod "openstack-operator-controller-manager-b6b55f9c-sl7sk" (UID: "87db2cf4-862d-4c0e-9cc9-72548e8bb63b") : secret "webhook-server-cert" not found Nov 24 21:38:41 crc kubenswrapper[4915]: E1124 21:38:41.746932 4915 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 21:38:41 crc kubenswrapper[4915]: E1124 21:38:41.747080 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-metrics-certs podName:87db2cf4-862d-4c0e-9cc9-72548e8bb63b nodeName:}" failed. No retries permitted until 2025-11-24 21:38:42.247064907 +0000 UTC m=+1140.563317080 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-metrics-certs") pod "openstack-operator-controller-manager-b6b55f9c-sl7sk" (UID: "87db2cf4-862d-4c0e-9cc9-72548e8bb63b") : secret "metrics-server-cert" not found Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.768415 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f254t\" (UniqueName: \"kubernetes.io/projected/49fac201-00d2-42d3-9e1f-ac2fde219037-kube-api-access-f254t\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zxrdr\" (UID: \"49fac201-00d2-42d3-9e1f-ac2fde219037\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxrdr" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.777629 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzcxr\" (UniqueName: \"kubernetes.io/projected/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-kube-api-access-mzcxr\") pod \"openstack-operator-controller-manager-b6b55f9c-sl7sk\" (UID: \"87db2cf4-862d-4c0e-9cc9-72548e8bb63b\") " pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.831579 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-rml9z"] Nov 24 21:38:41 crc kubenswrapper[4915]: I1124 21:38:41.848493 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a3128a6-4ca7-4cd5-800f-20860a97aed5-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7\" (UID: \"0a3128a6-4ca7-4cd5-800f-20860a97aed5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" Nov 24 21:38:41 crc kubenswrapper[4915]: E1124 21:38:41.848638 4915 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 21:38:41 crc kubenswrapper[4915]: E1124 21:38:41.848684 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a3128a6-4ca7-4cd5-800f-20860a97aed5-cert podName:0a3128a6-4ca7-4cd5-800f-20860a97aed5 nodeName:}" failed. No retries permitted until 2025-11-24 21:38:42.848670624 +0000 UTC m=+1141.164922797 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0a3128a6-4ca7-4cd5-800f-20860a97aed5-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" (UID: "0a3128a6-4ca7-4cd5-800f-20860a97aed5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.028323 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxrdr" Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.257619 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-webhook-certs\") pod \"openstack-operator-controller-manager-b6b55f9c-sl7sk\" (UID: \"87db2cf4-862d-4c0e-9cc9-72548e8bb63b\") " pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:42 crc kubenswrapper[4915]: E1124 21:38:42.258026 4915 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 21:38:42 crc kubenswrapper[4915]: E1124 21:38:42.258097 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-webhook-certs podName:87db2cf4-862d-4c0e-9cc9-72548e8bb63b nodeName:}" failed. 
No retries permitted until 2025-11-24 21:38:43.258078971 +0000 UTC m=+1141.574331144 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-webhook-certs") pod "openstack-operator-controller-manager-b6b55f9c-sl7sk" (UID: "87db2cf4-862d-4c0e-9cc9-72548e8bb63b") : secret "webhook-server-cert" not found Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.258101 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-metrics-certs\") pod \"openstack-operator-controller-manager-b6b55f9c-sl7sk\" (UID: \"87db2cf4-862d-4c0e-9cc9-72548e8bb63b\") " pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:42 crc kubenswrapper[4915]: E1124 21:38:42.258276 4915 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 24 21:38:42 crc kubenswrapper[4915]: E1124 21:38:42.258326 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-metrics-certs podName:87db2cf4-862d-4c0e-9cc9-72548e8bb63b nodeName:}" failed. No retries permitted until 2025-11-24 21:38:43.258312057 +0000 UTC m=+1141.574564230 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-metrics-certs") pod "openstack-operator-controller-manager-b6b55f9c-sl7sk" (UID: "87db2cf4-862d-4c0e-9cc9-72548e8bb63b") : secret "metrics-server-cert" not found Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.463414 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-kcs7l\" (UID: \"b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" Nov 24 21:38:42 crc kubenswrapper[4915]: E1124 21:38:42.464225 4915 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 24 21:38:42 crc kubenswrapper[4915]: E1124 21:38:42.464284 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert podName:b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b nodeName:}" failed. No retries permitted until 2025-11-24 21:38:44.464266553 +0000 UTC m=+1142.780518726 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert") pod "infra-operator-controller-manager-d5cc86f4b-kcs7l" (UID: "b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b") : secret "infra-operator-webhook-server-cert" not found Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.476111 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-8hkmn"] Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.491227 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd"] Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.497061 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m4z8z"] Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.525886 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4"] Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.535831 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-wp6b9"] Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.793048 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m4z8z" event={"ID":"6c2ce74c-40c4-4b98-ac53-1ce4869dfbe1","Type":"ContainerStarted","Data":"6228f8ea187ab0037ee1bc3f406e710a1df412deb9c5d8f184c02da8aed03044"} Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.797504 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-rml9z" event={"ID":"5c67ba4c-d4f5-496d-bf28-6d681f42c840","Type":"ContainerStarted","Data":"4e32b0765486e7c6795d6aafb60b24692d655c35ba543c0d1369da84ec6db387"} Nov 24 21:38:42 crc 
kubenswrapper[4915]: I1124 21:38:42.800132 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd" event={"ID":"04e18c27-a9e4-439e-994b-2c38eb126153","Type":"ContainerStarted","Data":"9892e6a572dcdfe1e007ce66ddc6e9c7ff2df88d85e1188d92cdb10d2c3a620c"} Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.801336 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8hkmn" event={"ID":"87d99431-c528-49c5-b28b-e32a6f46baaf","Type":"ContainerStarted","Data":"6f9cf431d878e6e75af03776077a3baeac4ad483ed3c769b148edd2ee6eff17c"} Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.805458 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4" event={"ID":"c8b0feeb-28f3-41a6-8ccc-a9eb042c4416","Type":"ContainerStarted","Data":"77be65bbdc3b564c8a22b201581ae186b1741cc8379dbaedffc9d4b0279a253c"} Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.807977 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wp6b9" event={"ID":"6638fde3-c855-47c4-a339-a4b64a3b83ad","Type":"ContainerStarted","Data":"5a7ff801a0a608ad9afffadeb28693cbc4e6832e0a90064f47c5b34bfb964e03"} Nov 24 21:38:42 crc kubenswrapper[4915]: I1124 21:38:42.871406 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a3128a6-4ca7-4cd5-800f-20860a97aed5-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7\" (UID: \"0a3128a6-4ca7-4cd5-800f-20860a97aed5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" Nov 24 21:38:42 crc kubenswrapper[4915]: E1124 21:38:42.871634 4915 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret 
"openstack-baremetal-operator-webhook-server-cert" not found Nov 24 21:38:42 crc kubenswrapper[4915]: E1124 21:38:42.871727 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0a3128a6-4ca7-4cd5-800f-20860a97aed5-cert podName:0a3128a6-4ca7-4cd5-800f-20860a97aed5 nodeName:}" failed. No retries permitted until 2025-11-24 21:38:44.871709068 +0000 UTC m=+1143.187961241 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0a3128a6-4ca7-4cd5-800f-20860a97aed5-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" (UID: "0a3128a6-4ca7-4cd5-800f-20860a97aed5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.229370 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6q9th"] Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.255324 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr"] Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.284225 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-webhook-certs\") pod \"openstack-operator-controller-manager-b6b55f9c-sl7sk\" (UID: \"87db2cf4-862d-4c0e-9cc9-72548e8bb63b\") " pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.284293 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-metrics-certs\") pod \"openstack-operator-controller-manager-b6b55f9c-sl7sk\" (UID: \"87db2cf4-862d-4c0e-9cc9-72548e8bb63b\") " 
pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.288106 4915 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.288177 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-webhook-certs podName:87db2cf4-862d-4c0e-9cc9-72548e8bb63b nodeName:}" failed. No retries permitted until 2025-11-24 21:38:45.288159514 +0000 UTC m=+1143.604411687 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-webhook-certs") pod "openstack-operator-controller-manager-b6b55f9c-sl7sk" (UID: "87db2cf4-862d-4c0e-9cc9-72548e8bb63b") : secret "webhook-server-cert" not found Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.309599 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-metrics-certs\") pod \"openstack-operator-controller-manager-b6b55f9c-sl7sk\" (UID: \"87db2cf4-862d-4c0e-9cc9-72548e8bb63b\") " pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.316245 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-86mf2"] Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.331511 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-vft2k"] Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.334649 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-8bzrw"] Nov 24 21:38:43 crc kubenswrapper[4915]: 
I1124 21:38:43.374346 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-ngwcx"] Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.384852 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9"] Nov 24 21:38:43 crc kubenswrapper[4915]: W1124 21:38:43.394683 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bbdd81f_f69b_4e09_b1b2_374723b591ab.slice/crio-06bcff1a8cbdda46b047d88a88015dc9a1506df6bae968371ae8891b3f5ddecf WatchSource:0}: Error finding container 06bcff1a8cbdda46b047d88a88015dc9a1506df6bae968371ae8891b3f5ddecf: Status 404 returned error can't find the container with id 06bcff1a8cbdda46b047d88a88015dc9a1506df6bae968371ae8891b3f5ddecf Nov 24 21:38:43 crc kubenswrapper[4915]: W1124 21:38:43.400649 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod567bba9d_1881_4a67_b6bb_678650252bcc.slice/crio-c3476ee367597acf8e3b6ca851470f65a5ef5d9ed68401bf89ec6059f35ff976 WatchSource:0}: Error finding container c3476ee367597acf8e3b6ca851470f65a5ef5d9ed68401bf89ec6059f35ff976: Status 404 returned error can't find the container with id c3476ee367597acf8e3b6ca851470f65a5ef5d9ed68401bf89ec6059f35ff976 Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.412104 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbzgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-xz4zz_openstack-operators(567bba9d-1881-4a67-b6bb-678650252bcc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.413882 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2"] Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.414754 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lbzgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-xz4zz_openstack-operators(567bba9d-1881-4a67-b6bb-678650252bcc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.418234 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gsm89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-zk5k2_openstack-operators(3bbdd81f-f69b-4e09-b1b2-374723b591ab): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.418834 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz" podUID="567bba9d-1881-4a67-b6bb-678650252bcc" Nov 24 21:38:43 crc kubenswrapper[4915]: W1124 21:38:43.418997 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd090cd1f_8af4_468c_881d_d04cf192b0c4.slice/crio-8d09ce099695f0eb53b85f13953bca888795945402358c14167cff403791d745 WatchSource:0}: Error finding container 8d09ce099695f0eb53b85f13953bca888795945402358c14167cff403791d745: Status 404 returned error can't find the container with id 8d09ce099695f0eb53b85f13953bca888795945402358c14167cff403791d745 Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.423020 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gsm89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-zk5k2_openstack-operators(3bbdd81f-f69b-4e09-b1b2-374723b591ab): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.424155 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2" podUID="3bbdd81f-f69b-4e09-b1b2-374723b591ab" Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.425121 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p"] Nov 24 21:38:43 crc kubenswrapper[4915]: W1124 21:38:43.427169 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f175a4a_2119_4b83_84f6_d067eb8be406.slice/crio-90bab3ec63d5fe7bc92257b774a011faadb702ac6f870e3eede23eb5ab11ed6a 
WatchSource:0}: Error finding container 90bab3ec63d5fe7bc92257b774a011faadb702ac6f870e3eede23eb5ab11ed6a: Status 404 returned error can't find the container with id 90bab3ec63d5fe7bc92257b774a011faadb702ac6f870e3eede23eb5ab11ed6a Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.442045 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh"] Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.443060 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xw4nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-4mrzh_openstack-operators(3f175a4a-2119-4b83-84f6-d067eb8be406): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:38:43 crc kubenswrapper[4915]: W1124 21:38:43.445347 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49fac201_00d2_42d3_9e1f_ac2fde219037.slice/crio-a4d8adc4da1c880e7f58949fb1026440d059004976107e654022ab75b5771abf WatchSource:0}: Error finding container a4d8adc4da1c880e7f58949fb1026440d059004976107e654022ab75b5771abf: Status 404 returned error can't find the container with id a4d8adc4da1c880e7f58949fb1026440d059004976107e654022ab75b5771abf Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.445500 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xw4nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-4mrzh_openstack-operators(3f175a4a-2119-4b83-84f6-d067eb8be406): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.447833 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh" podUID="3f175a4a-2119-4b83-84f6-d067eb8be406" Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.454577 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f254t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zxrdr_openstack-operators(49fac201-00d2-42d3-9e1f-ac2fde219037): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.455875 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxrdr" podUID="49fac201-00d2-42d3-9e1f-ac2fde219037" Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.462131 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674"] Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.468924 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv"] Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.479893 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz"] Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.487931 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxrdr"] Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.817822 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-vft2k" event={"ID":"32115f30-05b5-4828-ba4b-155b238026a1","Type":"ContainerStarted","Data":"624b5884b6f9000d4e92dae89ec20b2baa6de9bdacf64d93462fa51121b5fee6"} Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.819826 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-ngwcx" event={"ID":"de2fcac4-4e4f-404d-a8e9-a4774d3f3936","Type":"ContainerStarted","Data":"5b47da0397daaed4f5804fe4c897e437bf2da5ab4c531c45b95f112de3d39315"} Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.821270 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-86mf2" event={"ID":"2ce17a51-ca7e-4692-946d-1a09c9a865c5","Type":"ContainerStarted","Data":"1c4a16bdbc93e12c2961873fd974612b433d9d2d6e90c662365756be6d76f037"} Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.822259 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz" event={"ID":"567bba9d-1881-4a67-b6bb-678650252bcc","Type":"ContainerStarted","Data":"c3476ee367597acf8e3b6ca851470f65a5ef5d9ed68401bf89ec6059f35ff976"} Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.823499 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh" event={"ID":"3f175a4a-2119-4b83-84f6-d067eb8be406","Type":"ContainerStarted","Data":"90bab3ec63d5fe7bc92257b774a011faadb702ac6f870e3eede23eb5ab11ed6a"} Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.826995 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh" podUID="3f175a4a-2119-4b83-84f6-d067eb8be406" Nov 24 21:38:43 crc kubenswrapper[4915]: E1124 21:38:43.827251 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz" podUID="567bba9d-1881-4a67-b6bb-678650252bcc" Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.827905 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr" event={"ID":"d23a4b31-9618-4e98-82a6-8e32881bec59","Type":"ContainerStarted","Data":"7de8cc5c200f4ad9c228c8492a21282de957f105e1c6105b78648b552ed8b260"} Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.828904 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p" event={"ID":"3f8c6903-143e-450e-8ea1-92d6ac474b48","Type":"ContainerStarted","Data":"55265acafa40273462eeee83354127881bc90c01ce4e0a84ca979effab2a86b9"} Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.830686 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv" event={"ID":"272c7c4e-9ecc-42cf-8b44-0a61f7823578","Type":"ContainerStarted","Data":"458aeafd0e55d0692f792becad24d712a46edcca75a323fbf28c742b24b3493a"} Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.831967 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9" event={"ID":"cdd28f21-72b9-4818-88fd-68e6a8dbc508","Type":"ContainerStarted","Data":"f15bcfbed15a8f52aaa49549303a68422dd473f9ddfc15bb85f5aabacbd64454"} Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.833328 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2" event={"ID":"3bbdd81f-f69b-4e09-b1b2-374723b591ab","Type":"ContainerStarted","Data":"06bcff1a8cbdda46b047d88a88015dc9a1506df6bae968371ae8891b3f5ddecf"} Nov 24 21:38:43 crc kubenswrapper[4915]: 
E1124 21:38:43.835561 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2" podUID="3bbdd81f-f69b-4e09-b1b2-374723b591ab" Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.838194 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674" event={"ID":"03b268f6-b5db-44d1-8fc9-6d8cedb8c9b6","Type":"ContainerStarted","Data":"786f4e0656faa93449135a467dad47a4b9abdeb269b00a62e9ef68a710b28d00"} Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.844073 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6q9th" event={"ID":"caacbc5e-655b-4876-a8ec-94fc83478510","Type":"ContainerStarted","Data":"4ad31b42789c13aae24c7288dbdd33d56a56d2058f3c168b43985452844d3d73"} Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.861895 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxrdr" event={"ID":"49fac201-00d2-42d3-9e1f-ac2fde219037","Type":"ContainerStarted","Data":"a4d8adc4da1c880e7f58949fb1026440d059004976107e654022ab75b5771abf"} Nov 24 21:38:43 crc kubenswrapper[4915]: I1124 21:38:43.863978 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-8bzrw" event={"ID":"d090cd1f-8af4-468c-881d-d04cf192b0c4","Type":"ContainerStarted","Data":"8d09ce099695f0eb53b85f13953bca888795945402358c14167cff403791d745"} Nov 24 21:38:43 
crc kubenswrapper[4915]: E1124 21:38:43.864802 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxrdr" podUID="49fac201-00d2-42d3-9e1f-ac2fde219037" Nov 24 21:38:44 crc kubenswrapper[4915]: I1124 21:38:44.515964 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-kcs7l\" (UID: \"b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" Nov 24 21:38:44 crc kubenswrapper[4915]: E1124 21:38:44.517102 4915 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 24 21:38:44 crc kubenswrapper[4915]: E1124 21:38:44.517219 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert podName:b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b nodeName:}" failed. No retries permitted until 2025-11-24 21:38:48.517199146 +0000 UTC m=+1146.833451319 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert") pod "infra-operator-controller-manager-d5cc86f4b-kcs7l" (UID: "b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b") : secret "infra-operator-webhook-server-cert" not found Nov 24 21:38:44 crc kubenswrapper[4915]: E1124 21:38:44.885345 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxrdr" podUID="49fac201-00d2-42d3-9e1f-ac2fde219037" Nov 24 21:38:44 crc kubenswrapper[4915]: E1124 21:38:44.890207 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh" podUID="3f175a4a-2119-4b83-84f6-d067eb8be406" Nov 24 21:38:44 crc kubenswrapper[4915]: E1124 21:38:44.890288 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz" podUID="567bba9d-1881-4a67-b6bb-678650252bcc" Nov 24 21:38:44 crc kubenswrapper[4915]: E1124 21:38:44.890357 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2" podUID="3bbdd81f-f69b-4e09-b1b2-374723b591ab" Nov 24 21:38:44 crc kubenswrapper[4915]: I1124 21:38:44.943567 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a3128a6-4ca7-4cd5-800f-20860a97aed5-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7\" (UID: \"0a3128a6-4ca7-4cd5-800f-20860a97aed5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" Nov 24 21:38:44 crc kubenswrapper[4915]: I1124 21:38:44.974055 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a3128a6-4ca7-4cd5-800f-20860a97aed5-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7\" (UID: \"0a3128a6-4ca7-4cd5-800f-20860a97aed5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" Nov 24 21:38:45 crc kubenswrapper[4915]: I1124 21:38:45.056139 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cfrqm" Nov 24 21:38:45 crc kubenswrapper[4915]: I1124 21:38:45.064908 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" Nov 24 21:38:45 crc kubenswrapper[4915]: I1124 21:38:45.372516 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-webhook-certs\") pod \"openstack-operator-controller-manager-b6b55f9c-sl7sk\" (UID: \"87db2cf4-862d-4c0e-9cc9-72548e8bb63b\") " pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:45 crc kubenswrapper[4915]: I1124 21:38:45.393344 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/87db2cf4-862d-4c0e-9cc9-72548e8bb63b-webhook-certs\") pod \"openstack-operator-controller-manager-b6b55f9c-sl7sk\" (UID: \"87db2cf4-862d-4c0e-9cc9-72548e8bb63b\") " pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:45 crc kubenswrapper[4915]: I1124 21:38:45.620292 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mhlx7" Nov 24 21:38:45 crc kubenswrapper[4915]: I1124 21:38:45.629034 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:38:45 crc kubenswrapper[4915]: I1124 21:38:45.692521 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7"] Nov 24 21:38:48 crc kubenswrapper[4915]: W1124 21:38:48.031197 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a3128a6_4ca7_4cd5_800f_20860a97aed5.slice/crio-f128a83e094613894a445891df3e7999b7daaa96714f9923b88207231ce0fb2a WatchSource:0}: Error finding container f128a83e094613894a445891df3e7999b7daaa96714f9923b88207231ce0fb2a: Status 404 returned error can't find the container with id f128a83e094613894a445891df3e7999b7daaa96714f9923b88207231ce0fb2a Nov 24 21:38:48 crc kubenswrapper[4915]: I1124 21:38:48.529443 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-kcs7l\" (UID: \"b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" Nov 24 21:38:48 crc kubenswrapper[4915]: I1124 21:38:48.536139 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-kcs7l\" (UID: \"b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" Nov 24 21:38:48 crc kubenswrapper[4915]: I1124 21:38:48.587913 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9584m" Nov 24 21:38:48 crc kubenswrapper[4915]: I1124 21:38:48.595968 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" Nov 24 21:38:48 crc kubenswrapper[4915]: I1124 21:38:48.940769 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" event={"ID":"0a3128a6-4ca7-4cd5-800f-20860a97aed5","Type":"ContainerStarted","Data":"f128a83e094613894a445891df3e7999b7daaa96714f9923b88207231ce0fb2a"} Nov 24 21:38:54 crc kubenswrapper[4915]: I1124 21:38:54.327496 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:38:54 crc kubenswrapper[4915]: I1124 21:38:54.328087 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:39:05 crc kubenswrapper[4915]: E1124 21:39:05.075504 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:d38faa9070da05487afdaa9e261ad39274c2ed862daf42efa460a040431f1991" Nov 24 21:39:05 crc kubenswrapper[4915]: E1124 21:39:05.076348 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:d38faa9070da05487afdaa9e261ad39274c2ed862daf42efa460a040431f1991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qmssw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-68b95954c9-rpbgd_openstack-operators(04e18c27-a9e4-439e-994b-2c38eb126153): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:39:05 crc kubenswrapper[4915]: E1124 21:39:05.761563 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d" Nov 24 21:39:05 crc kubenswrapper[4915]: E1124 21:39:05.762101 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:82207e753574d4be246f86c4b074500d66cf20214aa80f0a8525cf3287a35e6d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-prwdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cb74df96-g9rrr_openstack-operators(d23a4b31-9618-4e98-82a6-8e32881bec59): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:39:06 crc kubenswrapper[4915]: E1124 21:39:06.993060 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c" Nov 24 21:39:06 crc kubenswrapper[4915]: E1124 21:39:06.993494 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-26g4b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-cc9cv_openstack-operators(272c7c4e-9ecc-42cf-8b44-0a61f7823578): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:39:07 crc kubenswrapper[4915]: E1124 21:39:07.418030 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13" Nov 24 21:39:07 crc kubenswrapper[4915]: E1124 21:39:07.418513 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4rl99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-whss9_openstack-operators(cdd28f21-72b9-4818-88fd-68e6a8dbc508): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:39:09 crc kubenswrapper[4915]: E1124 21:39:09.352312 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b" Nov 24 21:39:09 crc kubenswrapper[4915]: E1124 21:39:09.352802 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r5mzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-zr674_openstack-operators(03b268f6-b5db-44d1-8fc9-6d8cedb8c9b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:39:10 crc kubenswrapper[4915]: E1124 21:39:10.138683 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96" Nov 24 21:39:10 crc kubenswrapper[4915]: E1124 21:39:10.138876 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:5edd825a235f5784d9a65892763c5388c39df1731d0fcbf4ee33408b8c83ac96,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vj4fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-774b86978c-qdzj4_openstack-operators(c8b0feeb-28f3-41a6-8ccc-a9eb042c4416): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:39:10 crc kubenswrapper[4915]: E1124 21:39:10.615745 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd" Nov 24 21:39:10 crc kubenswrapper[4915]: E1124 21:39:10.616439 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOME
TER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom
:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack
-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVa
r{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUT
E_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,}
,EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueF
rom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4d92r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7_openstack-operators(0a3128a6-4ca7-4cd5-800f-20860a97aed5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:39:10 crc kubenswrapper[4915]: E1124 21:39:10.694746 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.39:5001/openstack-k8s-operators/telemetry-operator:56ac70de11c59aa2d2e243307b98da32e354f5ef" Nov 24 21:39:10 crc kubenswrapper[4915]: E1124 21:39:10.694820 4915 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.39:5001/openstack-k8s-operators/telemetry-operator:56ac70de11c59aa2d2e243307b98da32e354f5ef" Nov 24 21:39:10 crc kubenswrapper[4915]: E1124 21:39:10.695098 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.39:5001/openstack-k8s-operators/telemetry-operator:56ac70de11c59aa2d2e243307b98da32e354f5ef,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-926wj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-f55c5bd94-dck7p_openstack-operators(3f8c6903-143e-450e-8ea1-92d6ac474b48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:39:15 crc kubenswrapper[4915]: I1124 21:39:15.459752 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk"] Nov 24 21:39:15 crc kubenswrapper[4915]: I1124 21:39:15.614752 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l"] Nov 24 21:39:15 crc kubenswrapper[4915]: W1124 21:39:15.702915 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb37bc9b3_fd0d_43f8_b30c_b5dda3f4cb8b.slice/crio-4dae66a577bee6607d51b908689084e83352a3d6277c0a31036ce88bccb2b6c9 WatchSource:0}: Error finding container 4dae66a577bee6607d51b908689084e83352a3d6277c0a31036ce88bccb2b6c9: Status 404 returned error can't find the container with id 4dae66a577bee6607d51b908689084e83352a3d6277c0a31036ce88bccb2b6c9 Nov 24 21:39:15 crc kubenswrapper[4915]: 
W1124 21:39:15.705758 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87db2cf4_862d_4c0e_9cc9_72548e8bb63b.slice/crio-787e35f54f1030d52b4a952e0ee123f81b01d47a8f3cf3c5dd8f5f7c6c257043 WatchSource:0}: Error finding container 787e35f54f1030d52b4a952e0ee123f81b01d47a8f3cf3c5dd8f5f7c6c257043: Status 404 returned error can't find the container with id 787e35f54f1030d52b4a952e0ee123f81b01d47a8f3cf3c5dd8f5f7c6c257043 Nov 24 21:39:16 crc kubenswrapper[4915]: I1124 21:39:16.215531 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wp6b9" event={"ID":"6638fde3-c855-47c4-a339-a4b64a3b83ad","Type":"ContainerStarted","Data":"b5cb0e7b45f55e3e90f3d3a837a226a286574a7454c2782760beb7497d3e1b95"} Nov 24 21:39:16 crc kubenswrapper[4915]: I1124 21:39:16.217638 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-ngwcx" event={"ID":"de2fcac4-4e4f-404d-a8e9-a4774d3f3936","Type":"ContainerStarted","Data":"65228d7948b06cbfed51b2e0c167280bf99f1a5d6253f17f5c52bca050828d31"} Nov 24 21:39:16 crc kubenswrapper[4915]: I1124 21:39:16.227976 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m4z8z" event={"ID":"6c2ce74c-40c4-4b98-ac53-1ce4869dfbe1","Type":"ContainerStarted","Data":"71552dad87e2d20d334ce13ee4647cc8e899620a26ea57bc7e626477c6bae1dd"} Nov 24 21:39:16 crc kubenswrapper[4915]: I1124 21:39:16.230336 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-rml9z" event={"ID":"5c67ba4c-d4f5-496d-bf28-6d681f42c840","Type":"ContainerStarted","Data":"4e78f45d901c8f95d7f04bc216df649a3d41f041310667b39548e9b1dcc33be9"} Nov 24 21:39:16 crc kubenswrapper[4915]: I1124 21:39:16.234228 4915 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-8bzrw" event={"ID":"d090cd1f-8af4-468c-881d-d04cf192b0c4","Type":"ContainerStarted","Data":"10b744c8894cbc0790dfcd2174aa1d16ab7a714be2b4208c6f74ead841e1656b"} Nov 24 21:39:16 crc kubenswrapper[4915]: I1124 21:39:16.236956 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" event={"ID":"b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b","Type":"ContainerStarted","Data":"4dae66a577bee6607d51b908689084e83352a3d6277c0a31036ce88bccb2b6c9"} Nov 24 21:39:16 crc kubenswrapper[4915]: I1124 21:39:16.240470 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" event={"ID":"87db2cf4-862d-4c0e-9cc9-72548e8bb63b","Type":"ContainerStarted","Data":"787e35f54f1030d52b4a952e0ee123f81b01d47a8f3cf3c5dd8f5f7c6c257043"} Nov 24 21:39:17 crc kubenswrapper[4915]: I1124 21:39:17.255829 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6q9th" event={"ID":"caacbc5e-655b-4876-a8ec-94fc83478510","Type":"ContainerStarted","Data":"a59c52c9b15c640857646b20e6c9821a15b710c27b61d94b542af15e5e4d78fc"} Nov 24 21:39:17 crc kubenswrapper[4915]: I1124 21:39:17.265388 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8hkmn" event={"ID":"87d99431-c528-49c5-b28b-e32a6f46baaf","Type":"ContainerStarted","Data":"f77a32d4da20658bbdcf3a1f6be03e303f7a87051ef4256f434ada22b7ed2144"} Nov 24 21:39:17 crc kubenswrapper[4915]: I1124 21:39:17.270561 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-vft2k" event={"ID":"32115f30-05b5-4828-ba4b-155b238026a1","Type":"ContainerStarted","Data":"218e1dfe3fe14bd8fd58ce229570159b8b6fbb65f1f861a17b1ad09980d284d5"} Nov 24 21:39:19 
crc kubenswrapper[4915]: I1124 21:39:19.308066 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2" event={"ID":"3bbdd81f-f69b-4e09-b1b2-374723b591ab","Type":"ContainerStarted","Data":"f763f94d62f13c6e7eb4cd2782c3067af7c4f6386b318d4bdd5dd744bc04e807"}
Nov 24 21:39:19 crc kubenswrapper[4915]: I1124 21:39:19.310561 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-86mf2" event={"ID":"2ce17a51-ca7e-4692-946d-1a09c9a865c5","Type":"ContainerStarted","Data":"709b29e5c09e533def31185e3591a93e5376587fcc1c3ab48cd334134b33b745"}
Nov 24 21:39:19 crc kubenswrapper[4915]: I1124 21:39:19.312420 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" event={"ID":"87db2cf4-862d-4c0e-9cc9-72548e8bb63b","Type":"ContainerStarted","Data":"3fe843a302d9f2e8ac33b6e226a5417495470b7bfe1d8f61002d93dddcef6eb1"}
Nov 24 21:39:19 crc kubenswrapper[4915]: I1124 21:39:19.312600 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk"
Nov 24 21:39:19 crc kubenswrapper[4915]: I1124 21:39:19.339048 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" podStartSLOduration=38.33903144 podStartE2EDuration="38.33903144s" podCreationTimestamp="2025-11-24 21:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:39:19.335012764 +0000 UTC m=+1177.651264957" watchObservedRunningTime="2025-11-24 21:39:19.33903144 +0000 UTC m=+1177.655283633"
Nov 24 21:39:21 crc kubenswrapper[4915]: E1124 21:39:21.128681 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674" podUID="03b268f6-b5db-44d1-8fc9-6d8cedb8c9b6"
Nov 24 21:39:21 crc kubenswrapper[4915]: E1124 21:39:21.148602 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv" podUID="272c7c4e-9ecc-42cf-8b44-0a61f7823578"
Nov 24 21:39:21 crc kubenswrapper[4915]: E1124 21:39:21.181962 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9" podUID="cdd28f21-72b9-4818-88fd-68e6a8dbc508"
Nov 24 21:39:21 crc kubenswrapper[4915]: I1124 21:39:21.330446 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9" event={"ID":"cdd28f21-72b9-4818-88fd-68e6a8dbc508","Type":"ContainerStarted","Data":"a4acca86b59ae8d7e046502cfea34634a707257980190cae35a4befc75000c61"}
Nov 24 21:39:21 crc kubenswrapper[4915]: I1124 21:39:21.331959 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674" event={"ID":"03b268f6-b5db-44d1-8fc9-6d8cedb8c9b6","Type":"ContainerStarted","Data":"b8a26776d794ebc7ee1811f123f5326d51cd4c32fd8b5025d0429202438be65d"}
Nov 24 21:39:21 crc kubenswrapper[4915]: I1124 21:39:21.341563 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh" event={"ID":"3f175a4a-2119-4b83-84f6-d067eb8be406","Type":"ContainerStarted","Data":"5fe357a04661ea7dfbc256ba126e832d564f027a3c80b89cdf6abf1436d303cf"}
Nov 24 21:39:21 crc kubenswrapper[4915]: I1124 21:39:21.341876 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh" event={"ID":"3f175a4a-2119-4b83-84f6-d067eb8be406","Type":"ContainerStarted","Data":"45e89624791b7215a62fd43bae840ae57efb0923e3c3029c7b9fd25254bc3a78"}
Nov 24 21:39:21 crc kubenswrapper[4915]: I1124 21:39:21.342885 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh"
Nov 24 21:39:21 crc kubenswrapper[4915]: I1124 21:39:21.354721 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxrdr" event={"ID":"49fac201-00d2-42d3-9e1f-ac2fde219037","Type":"ContainerStarted","Data":"d62dc666a3504fe6aa22662eb886c9c1f676b0768b75f108b7178eaa080b81b8"}
Nov 24 21:39:21 crc kubenswrapper[4915]: I1124 21:39:21.357224 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" event={"ID":"b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b","Type":"ContainerStarted","Data":"825a9051296a996a59dc160b38783a967d906f4cc74d85effb7d67aa51066353"}
Nov 24 21:39:21 crc kubenswrapper[4915]: I1124 21:39:21.358808 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz" event={"ID":"567bba9d-1881-4a67-b6bb-678650252bcc","Type":"ContainerStarted","Data":"261bece91161e05c14ce1fbdc62679107a6ecec13eb68aaeb4b30eb779962e35"}
Nov 24 21:39:21 crc kubenswrapper[4915]: I1124 21:39:21.383591 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv" event={"ID":"272c7c4e-9ecc-42cf-8b44-0a61f7823578","Type":"ContainerStarted","Data":"5111c92593b61c0b7dccf90157888191700e36f1522e748905a1d7033092d9a2"}
Nov 24 21:39:21 crc kubenswrapper[4915]: E1124 21:39:21.397370 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p" podUID="3f8c6903-143e-450e-8ea1-92d6ac474b48"
Nov 24 21:39:21 crc kubenswrapper[4915]: I1124 21:39:21.409316 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh" podStartSLOduration=9.175929623 podStartE2EDuration="41.409297037s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:43.442958194 +0000 UTC m=+1141.759210367" lastFinishedPulling="2025-11-24 21:39:15.676325588 +0000 UTC m=+1173.992577781" observedRunningTime="2025-11-24 21:39:21.393375464 +0000 UTC m=+1179.709627647" watchObservedRunningTime="2025-11-24 21:39:21.409297037 +0000 UTC m=+1179.725549210"
Nov 24 21:39:21 crc kubenswrapper[4915]: I1124 21:39:21.439295 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zxrdr" podStartSLOduration=8.04182509 podStartE2EDuration="40.439273677s" podCreationTimestamp="2025-11-24 21:38:41 +0000 UTC" firstStartedPulling="2025-11-24 21:38:43.454413392 +0000 UTC m=+1141.770665565" lastFinishedPulling="2025-11-24 21:39:15.851861959 +0000 UTC m=+1174.168114152" observedRunningTime="2025-11-24 21:39:21.43488756 +0000 UTC m=+1179.751139733" watchObservedRunningTime="2025-11-24 21:39:21.439273677 +0000 UTC m=+1179.755525860"
Nov 24 21:39:21 crc kubenswrapper[4915]: E1124 21:39:21.480381 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr" podUID="d23a4b31-9618-4e98-82a6-8e32881bec59"
Nov 24 21:39:21 crc kubenswrapper[4915]: E1124 21:39:21.703451 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" podUID="0a3128a6-4ca7-4cd5-800f-20860a97aed5"
Nov 24 21:39:22 crc kubenswrapper[4915]: E1124 21:39:22.081274 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd" podUID="04e18c27-a9e4-439e-994b-2c38eb126153"
Nov 24 21:39:22 crc kubenswrapper[4915]: E1124 21:39:22.239671 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4" podUID="c8b0feeb-28f3-41a6-8ccc-a9eb042c4416"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.393901 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-rml9z" event={"ID":"5c67ba4c-d4f5-496d-bf28-6d681f42c840","Type":"ContainerStarted","Data":"39186c8e3495a95f404af5b50a40e3e48065a8dc6475574e9871a5f97d98e506"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.394253 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-rml9z"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.396050 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-86mf2" event={"ID":"2ce17a51-ca7e-4692-946d-1a09c9a865c5","Type":"ContainerStarted","Data":"7d014b644d58b8ac8a103531c7861cbd314aec351e739e1d7e7cea763e9f5094"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.396199 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-86mf2"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.397728 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-rml9z"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.398209 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6q9th" event={"ID":"caacbc5e-655b-4876-a8ec-94fc83478510","Type":"ContainerStarted","Data":"3320cdb35bf816d4f7e388815db859ca5c4d3eab8e668807542ba069aceccbfb"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.398387 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6q9th"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.400406 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6q9th"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.400654 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz" event={"ID":"567bba9d-1881-4a67-b6bb-678650252bcc","Type":"ContainerStarted","Data":"bda8bfe9cbcc840a6501968fc0356b11fa65aae00c6d8efeddbff915aecde956"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.400830 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.402718 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4" event={"ID":"c8b0feeb-28f3-41a6-8ccc-a9eb042c4416","Type":"ContainerStarted","Data":"bc943f2c4839117ccd7b38a7baa83e82acb8a9fc1a494c2e3016103794c7ccdc"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.404989 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wp6b9" event={"ID":"6638fde3-c855-47c4-a339-a4b64a3b83ad","Type":"ContainerStarted","Data":"3ac6b905f57dc5ee6f3faba9a10f3ff5588e9b3dc609f91bef6e6fb34d5b1882"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.405230 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wp6b9"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.406659 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wp6b9"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.407112 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" event={"ID":"0a3128a6-4ca7-4cd5-800f-20860a97aed5","Type":"ContainerStarted","Data":"b5d2125c731f4b34e0cb75bd84af0c0ceeadf1f82bffbff1d7f39690b56e57e7"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.409037 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m4z8z" event={"ID":"6c2ce74c-40c4-4b98-ac53-1ce4869dfbe1","Type":"ContainerStarted","Data":"ca088348cee9def2ff69545270aa43eafaf03f6f76cc2b6194376d1edf514738"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.409267 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m4z8z"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.411465 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m4z8z"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.411816 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr" event={"ID":"d23a4b31-9618-4e98-82a6-8e32881bec59","Type":"ContainerStarted","Data":"5fa064bd5ebbf6f542ee56e15b93d8bd8fe4775870bf3fdd90b67881b850be34"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.417067 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv" event={"ID":"272c7c4e-9ecc-42cf-8b44-0a61f7823578","Type":"ContainerStarted","Data":"3dbb714d0e2be0a03f8e634ea070cde03051eb21cf739084c57433a4824d62d7"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.417192 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.421838 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-vft2k" event={"ID":"32115f30-05b5-4828-ba4b-155b238026a1","Type":"ContainerStarted","Data":"8c6bbaecd796e728d6ce3030434843a2b177ddcd2ab3d495458d1cc6b8ea38c7"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.422010 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-vft2k"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.423729 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-rml9z" podStartSLOduration=3.709563099 podStartE2EDuration="42.423714194s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:41.932146562 +0000 UTC m=+1140.248398735" lastFinishedPulling="2025-11-24 21:39:20.646297647 +0000 UTC m=+1178.962549830" observedRunningTime="2025-11-24 21:39:22.414419776 +0000 UTC m=+1180.730671949" watchObservedRunningTime="2025-11-24 21:39:22.423714194 +0000 UTC m=+1180.739966367"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.440699 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-vft2k"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.440729 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9" event={"ID":"cdd28f21-72b9-4818-88fd-68e6a8dbc508","Type":"ContainerStarted","Data":"978e29801a62e2fd0b44ec081241c65d49cda4e76ca1c301f3463209abd6b5ed"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.440748 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.443411 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd" event={"ID":"04e18c27-a9e4-439e-994b-2c38eb126153","Type":"ContainerStarted","Data":"e039307845ebe28d378115ce24b595cad904c6a27d289bdea9bcee8c63a1162f"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.447666 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-8bzrw" event={"ID":"d090cd1f-8af4-468c-881d-d04cf192b0c4","Type":"ContainerStarted","Data":"a307b94c42cc9258e36ec0ef66e52770d5e985f8522a26e7a6f4f049df6bb53c"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.448578 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-8bzrw"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.453267 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-8bzrw"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.455880 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8hkmn" event={"ID":"87d99431-c528-49c5-b28b-e32a6f46baaf","Type":"ContainerStarted","Data":"9e5646c739e2c3c52fa6b1963c528e0a3adb1497fdae8e6a4da13e6cf90b8745"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.458015 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8hkmn"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.462986 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8hkmn"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.468954 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p" event={"ID":"3f8c6903-143e-450e-8ea1-92d6ac474b48","Type":"ContainerStarted","Data":"98a893dd80bdda13d6849a875b21e9abe934db52fec3ac82fd11a4c3ce0faad8"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.495156 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2" event={"ID":"3bbdd81f-f69b-4e09-b1b2-374723b591ab","Type":"ContainerStarted","Data":"53641ade82f8364c517e1ac4050db2082b5070739e603f45ab9a938a9e882723"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.496061 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.525643 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-ngwcx" event={"ID":"de2fcac4-4e4f-404d-a8e9-a4774d3f3936","Type":"ContainerStarted","Data":"d0c26330959ef75cbe9ac3ef54b848c0698467433bbf1171bf36e2e51c395de0"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.530122 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-ngwcx"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.536956 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wp6b9" podStartSLOduration=4.421888685 podStartE2EDuration="42.536931794s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:42.530089817 +0000 UTC m=+1140.846342000" lastFinishedPulling="2025-11-24 21:39:20.645132926 +0000 UTC m=+1178.961385109" observedRunningTime="2025-11-24 21:39:22.5180402 +0000 UTC m=+1180.834292373" watchObservedRunningTime="2025-11-24 21:39:22.536931794 +0000 UTC m=+1180.853183967"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.539396 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674" event={"ID":"03b268f6-b5db-44d1-8fc9-6d8cedb8c9b6","Type":"ContainerStarted","Data":"49038d39d1b2f9cdbeea1350be5a50f32ab3bb7759fe19a75105f2ebcefa90a1"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.539553 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.540424 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-ngwcx"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.543721 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" event={"ID":"b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b","Type":"ContainerStarted","Data":"f82fbebdb6d949c098203f7ff2e56c1a78c3141b1d925b782cdb8db4c0b61e48"}
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.543770 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.557729 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz" podStartSLOduration=10.292385142 podStartE2EDuration="42.557712368s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:43.41166716 +0000 UTC m=+1141.727919333" lastFinishedPulling="2025-11-24 21:39:15.676994376 +0000 UTC m=+1173.993246559" observedRunningTime="2025-11-24 21:39:22.55326545 +0000 UTC m=+1180.869517623" watchObservedRunningTime="2025-11-24 21:39:22.557712368 +0000 UTC m=+1180.873964541"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.590663 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-6q9th" podStartSLOduration=5.255960286 podStartE2EDuration="42.590639847s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:43.309460658 +0000 UTC m=+1141.625712831" lastFinishedPulling="2025-11-24 21:39:20.644140209 +0000 UTC m=+1178.960392392" observedRunningTime="2025-11-24 21:39:22.580883567 +0000 UTC m=+1180.897135750" watchObservedRunningTime="2025-11-24 21:39:22.590639847 +0000 UTC m=+1180.906892030"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.642362 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-m4z8z" podStartSLOduration=4.489245875 podStartE2EDuration="42.642338545s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:42.506982034 +0000 UTC m=+1140.823234207" lastFinishedPulling="2025-11-24 21:39:20.660074704 +0000 UTC m=+1178.976326877" observedRunningTime="2025-11-24 21:39:22.611091182 +0000 UTC m=+1180.927343355" watchObservedRunningTime="2025-11-24 21:39:22.642338545 +0000 UTC m=+1180.958590718"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.683706 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-86mf2" podStartSLOduration=5.42004691 podStartE2EDuration="42.683691158s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:43.380687776 +0000 UTC m=+1141.696939949" lastFinishedPulling="2025-11-24 21:39:20.644332024 +0000 UTC m=+1178.960584197" observedRunningTime="2025-11-24 21:39:22.683234796 +0000 UTC m=+1180.999486969" watchObservedRunningTime="2025-11-24 21:39:22.683691158 +0000 UTC m=+1180.999943331"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.769717 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-ngwcx" podStartSLOduration=4.533514987 podStartE2EDuration="41.769685372s" podCreationTimestamp="2025-11-24 21:38:41 +0000 UTC" firstStartedPulling="2025-11-24 21:38:43.380391978 +0000 UTC m=+1141.696644141" lastFinishedPulling="2025-11-24 21:39:20.616562333 +0000 UTC m=+1178.932814526" observedRunningTime="2025-11-24 21:39:22.753330096 +0000 UTC m=+1181.069582279" watchObservedRunningTime="2025-11-24 21:39:22.769685372 +0000 UTC m=+1181.085937545"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.831387 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8hkmn" podStartSLOduration=4.697566856 podStartE2EDuration="42.831364887s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:42.482265299 +0000 UTC m=+1140.798517472" lastFinishedPulling="2025-11-24 21:39:20.61606333 +0000 UTC m=+1178.932315503" observedRunningTime="2025-11-24 21:39:22.821832903 +0000 UTC m=+1181.138085076" watchObservedRunningTime="2025-11-24 21:39:22.831364887 +0000 UTC m=+1181.147617060"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.890939 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" podStartSLOduration=38.188598496 podStartE2EDuration="42.890921406s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:39:15.719409107 +0000 UTC m=+1174.035661280" lastFinishedPulling="2025-11-24 21:39:20.421732017 +0000 UTC m=+1178.737984190" observedRunningTime="2025-11-24 21:39:22.887043313 +0000 UTC m=+1181.203295476" watchObservedRunningTime="2025-11-24 21:39:22.890921406 +0000 UTC m=+1181.207173579"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.891209 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9" podStartSLOduration=4.413936124 podStartE2EDuration="42.891203383s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:43.380410938 +0000 UTC m=+1141.696663111" lastFinishedPulling="2025-11-24 21:39:21.857678197 +0000 UTC m=+1180.173930370" observedRunningTime="2025-11-24 21:39:22.851065483 +0000 UTC m=+1181.167317666" watchObservedRunningTime="2025-11-24 21:39:22.891203383 +0000 UTC m=+1181.207455556"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.930142 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-8bzrw" podStartSLOduration=5.770833728 podStartE2EDuration="42.930124901s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:43.502098636 +0000 UTC m=+1141.818350799" lastFinishedPulling="2025-11-24 21:39:20.661389789 +0000 UTC m=+1178.977641972" observedRunningTime="2025-11-24 21:39:22.927639895 +0000 UTC m=+1181.243892068" watchObservedRunningTime="2025-11-24 21:39:22.930124901 +0000 UTC m=+1181.246377074"
Nov 24 21:39:22 crc kubenswrapper[4915]: I1124 21:39:22.986321 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-vft2k" podStartSLOduration=5.7215203500000005 podStartE2EDuration="42.986297379s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:43.380586393 +0000 UTC m=+1141.696838566" lastFinishedPulling="2025-11-24 21:39:20.645363422 +0000 UTC m=+1178.961615595" observedRunningTime="2025-11-24 21:39:22.964394645 +0000 UTC m=+1181.280646818" watchObservedRunningTime="2025-11-24 21:39:22.986297379 +0000 UTC m=+1181.302549552"
Nov 24 21:39:23 crc kubenswrapper[4915]: I1124 21:39:23.048259 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2" podStartSLOduration=5.760695816 podStartE2EDuration="43.048233831s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:43.418132825 +0000 UTC m=+1141.734384998" lastFinishedPulling="2025-11-24 21:39:20.70567084 +0000 UTC m=+1179.021923013" observedRunningTime="2025-11-24 21:39:23.001736192 +0000 UTC m=+1181.317988375" watchObservedRunningTime="2025-11-24 21:39:23.048233831 +0000 UTC m=+1181.364486004"
Nov 24 21:39:23 crc kubenswrapper[4915]: I1124 21:39:23.109562 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674" podStartSLOduration=4.626664619 podStartE2EDuration="43.109546587s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:43.37747857 +0000 UTC m=+1141.693730743" lastFinishedPulling="2025-11-24 21:39:21.860360538 +0000 UTC m=+1180.176612711" observedRunningTime="2025-11-24 21:39:23.106403733 +0000 UTC m=+1181.422655906" watchObservedRunningTime="2025-11-24 21:39:23.109546587 +0000 UTC m=+1181.425798750"
Nov 24 21:39:23 crc kubenswrapper[4915]: I1124 21:39:23.143285 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv" podStartSLOduration=4.55097308 podStartE2EDuration="43.14304024s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:43.384619682 +0000 UTC m=+1141.700871855" lastFinishedPulling="2025-11-24 21:39:21.976686842 +0000 UTC m=+1180.292939015" observedRunningTime="2025-11-24 21:39:23.138302203 +0000 UTC m=+1181.454554376" watchObservedRunningTime="2025-11-24 21:39:23.14304024 +0000 UTC m=+1181.459292413"
Nov 24 21:39:23 crc kubenswrapper[4915]: I1124 21:39:23.558735 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p" event={"ID":"3f8c6903-143e-450e-8ea1-92d6ac474b48","Type":"ContainerStarted","Data":"ec14e5eb9c58fed3ddd0c9283eef9f737e7faefdf199c81736fe116ec52b16ae"}
Nov 24 21:39:23 crc kubenswrapper[4915]: I1124 21:39:23.569544 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-zk5k2"
Nov 24 21:39:23 crc kubenswrapper[4915]: I1124 21:39:23.569600 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p"
Nov 24 21:39:23 crc kubenswrapper[4915]: I1124 21:39:23.574601 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-86mf2"
Nov 24 21:39:23 crc kubenswrapper[4915]: I1124 21:39:23.590403 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p" podStartSLOduration=4.293410152 podStartE2EDuration="43.590378771s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:43.311441331 +0000 UTC m=+1141.627693514" lastFinishedPulling="2025-11-24 21:39:22.60840996 +0000 UTC m=+1180.924662133" observedRunningTime="2025-11-24 21:39:23.584179096 +0000 UTC m=+1181.900431269" watchObservedRunningTime="2025-11-24 21:39:23.590378771 +0000 UTC m=+1181.906630954"
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.327389 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.327722 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.327772 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd"
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.328440 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c001cd4c9ce6030e46567b52ccde6925b9b174f41dc20336633c7c1d5f367107"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.328496 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://c001cd4c9ce6030e46567b52ccde6925b9b174f41dc20336633c7c1d5f367107" gracePeriod=600
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.566291 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4" event={"ID":"c8b0feeb-28f3-41a6-8ccc-a9eb042c4416","Type":"ContainerStarted","Data":"bf93601f36aca037fe2a63b188ec6342d57274e40eeede1d1c818b591bdae75e"}
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.566634 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4"
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.568996 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" event={"ID":"0a3128a6-4ca7-4cd5-800f-20860a97aed5","Type":"ContainerStarted","Data":"7e55f9777b42ca1256a861269c288cfbf88e7c141e3258b2b830bd1fa2d21d04"}
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.569111 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7"
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.571086 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd" event={"ID":"04e18c27-a9e4-439e-994b-2c38eb126153","Type":"ContainerStarted","Data":"5747a65257ed94e339738786c9ab4f84dc373c6eb105adf82fe9a1c8633a1c81"}
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.571213 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd"
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.573265 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr" event={"ID":"d23a4b31-9618-4e98-82a6-8e32881bec59","Type":"ContainerStarted","Data":"6ee060019d5b808e2258abe27124623508d3868cab407185356cb955816756cd"}
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.573361 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr"
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.575935 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="c001cd4c9ce6030e46567b52ccde6925b9b174f41dc20336633c7c1d5f367107" exitCode=0
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.576072 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"c001cd4c9ce6030e46567b52ccde6925b9b174f41dc20336633c7c1d5f367107"}
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.576113 4915 scope.go:117] "RemoveContainer" containerID="9b807af914a662edc8043def20fa4b712cbac16789b5da03da771b483217896d"
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.595493 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4" podStartSLOduration=3.373226644 podStartE2EDuration="44.59547402s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:42.524618869 +0000 UTC m=+1140.840871042" lastFinishedPulling="2025-11-24 21:39:23.746866245 +0000 UTC m=+1182.063118418" observedRunningTime="2025-11-24 21:39:24.58984438 +0000 UTC m=+1182.906096563" watchObservedRunningTime="2025-11-24 21:39:24.59547402 +0000 UTC m=+1182.911726183"
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.632268 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" podStartSLOduration=8.918216411 podStartE2EDuration="44.6322505s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:48.034363887 +0000 UTC m=+1146.350616060" lastFinishedPulling="2025-11-24 21:39:23.748397976 +0000 UTC m=+1182.064650149" observedRunningTime="2025-11-24 21:39:24.621913674 +0000 UTC m=+1182.938165867" watchObservedRunningTime="2025-11-24 21:39:24.6322505 +0000 UTC m=+1182.948502673"
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.638321 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd" podStartSLOduration=3.397836937 podStartE2EDuration="44.638299402s" podCreationTimestamp="2025-11-24 21:38:40 +0000 UTC" firstStartedPulling="2025-11-24 21:38:42.5075438 +0000 UTC m=+1140.823795973" lastFinishedPulling="2025-11-24 21:39:23.748006265 +0000 UTC m=+1182.064258438" observedRunningTime="2025-11-24 21:39:24.636590376 +0000 UTC m=+1182.952842559" watchObservedRunningTime="2025-11-24 21:39:24.638299402 +0000 UTC m=+1182.954551575"
Nov 24 21:39:24 crc kubenswrapper[4915]: I1124 21:39:24.657318 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr"
podStartSLOduration=3.218834143 podStartE2EDuration="43.657303239s" podCreationTimestamp="2025-11-24 21:38:41 +0000 UTC" firstStartedPulling="2025-11-24 21:38:43.308939404 +0000 UTC m=+1141.625191577" lastFinishedPulling="2025-11-24 21:39:23.7474085 +0000 UTC m=+1182.063660673" observedRunningTime="2025-11-24 21:39:24.652952142 +0000 UTC m=+1182.969204315" watchObservedRunningTime="2025-11-24 21:39:24.657303239 +0000 UTC m=+1182.973555402" Nov 24 21:39:25 crc kubenswrapper[4915]: I1124 21:39:25.589081 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"dce5b421849bc6dedc5b880936ade9c03271bcdbe605ecf2cad976e72aebbd14"} Nov 24 21:39:25 crc kubenswrapper[4915]: I1124 21:39:25.636973 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-b6b55f9c-sl7sk" Nov 24 21:39:28 crc kubenswrapper[4915]: I1124 21:39:28.606028 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-kcs7l" Nov 24 21:39:30 crc kubenswrapper[4915]: I1124 21:39:30.993770 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-rpbgd" Nov 24 21:39:31 crc kubenswrapper[4915]: I1124 21:39:31.037090 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-qdzj4" Nov 24 21:39:31 crc kubenswrapper[4915]: I1124 21:39:31.376893 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-4mrzh" Nov 24 21:39:31 crc kubenswrapper[4915]: I1124 21:39:31.402339 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-xz4zz" Nov 24 21:39:31 crc kubenswrapper[4915]: I1124 21:39:31.429500 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-whss9" Nov 24 21:39:31 crc kubenswrapper[4915]: I1124 21:39:31.572133 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-zr674" Nov 24 21:39:31 crc kubenswrapper[4915]: I1124 21:39:31.600368 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-cc9cv" Nov 24 21:39:31 crc kubenswrapper[4915]: I1124 21:39:31.654521 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-f55c5bd94-dck7p" Nov 24 21:39:31 crc kubenswrapper[4915]: I1124 21:39:31.681378 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-g9rrr" Nov 24 21:39:35 crc kubenswrapper[4915]: I1124 21:39:35.076370 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7" Nov 24 21:39:45 crc kubenswrapper[4915]: I1124 21:39:45.728278 4915 scope.go:117] "RemoveContainer" containerID="7ec97f063cb47da37514d36eee697e4b15a9a3b254071ff1cb441f8e230ccda5" Nov 24 21:39:45 crc kubenswrapper[4915]: I1124 21:39:45.768318 4915 scope.go:117] "RemoveContainer" containerID="9301232e97c2fca102b8a5abd2a6c9ca6da2bd839158b1531e753918909534e7" Nov 24 21:39:45 crc kubenswrapper[4915]: I1124 21:39:45.823305 4915 scope.go:117] "RemoveContainer" containerID="a56a768729b534a598740ce6685b7a7a6d80eb13454c398af69654d2f5203115" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.192291 4915 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-28jcc"] Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.194733 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-28jcc" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.199941 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.201008 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-28jcc"] Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.202060 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.202149 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-rbcww" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.202179 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.275752 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v5lml"] Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.277458 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.281194 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.291442 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v5lml"] Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.344792 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vg7v\" (UniqueName: \"kubernetes.io/projected/1e7d248b-8288-4385-86b4-30fa3d43b8a4-kube-api-access-7vg7v\") pod \"dnsmasq-dns-675f4bcbfc-28jcc\" (UID: \"1e7d248b-8288-4385-86b4-30fa3d43b8a4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-28jcc" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.345029 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7d248b-8288-4385-86b4-30fa3d43b8a4-config\") pod \"dnsmasq-dns-675f4bcbfc-28jcc\" (UID: \"1e7d248b-8288-4385-86b4-30fa3d43b8a4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-28jcc" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.447249 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vg7v\" (UniqueName: \"kubernetes.io/projected/1e7d248b-8288-4385-86b4-30fa3d43b8a4-kube-api-access-7vg7v\") pod \"dnsmasq-dns-675f4bcbfc-28jcc\" (UID: \"1e7d248b-8288-4385-86b4-30fa3d43b8a4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-28jcc" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.447312 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djpkk\" (UniqueName: \"kubernetes.io/projected/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-kube-api-access-djpkk\") pod \"dnsmasq-dns-78dd6ddcc-v5lml\" (UID: \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.447418 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7d248b-8288-4385-86b4-30fa3d43b8a4-config\") pod \"dnsmasq-dns-675f4bcbfc-28jcc\" (UID: \"1e7d248b-8288-4385-86b4-30fa3d43b8a4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-28jcc" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.447448 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v5lml\" (UID: \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.447481 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-config\") pod \"dnsmasq-dns-78dd6ddcc-v5lml\" (UID: \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.448719 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7d248b-8288-4385-86b4-30fa3d43b8a4-config\") pod \"dnsmasq-dns-675f4bcbfc-28jcc\" (UID: \"1e7d248b-8288-4385-86b4-30fa3d43b8a4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-28jcc" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.476509 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vg7v\" (UniqueName: \"kubernetes.io/projected/1e7d248b-8288-4385-86b4-30fa3d43b8a4-kube-api-access-7vg7v\") pod \"dnsmasq-dns-675f4bcbfc-28jcc\" (UID: \"1e7d248b-8288-4385-86b4-30fa3d43b8a4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-28jcc" Nov 24 21:39:51 crc kubenswrapper[4915]: 
I1124 21:39:51.513397 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-28jcc" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.549275 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v5lml\" (UID: \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.549337 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-config\") pod \"dnsmasq-dns-78dd6ddcc-v5lml\" (UID: \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.549441 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djpkk\" (UniqueName: \"kubernetes.io/projected/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-kube-api-access-djpkk\") pod \"dnsmasq-dns-78dd6ddcc-v5lml\" (UID: \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.550278 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-v5lml\" (UID: \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.550757 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-config\") pod \"dnsmasq-dns-78dd6ddcc-v5lml\" (UID: \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.570562 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djpkk\" (UniqueName: \"kubernetes.io/projected/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-kube-api-access-djpkk\") pod \"dnsmasq-dns-78dd6ddcc-v5lml\" (UID: \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\") " pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.601347 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.913101 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v5lml"] Nov 24 21:39:51 crc kubenswrapper[4915]: I1124 21:39:51.958747 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-28jcc"] Nov 24 21:39:51 crc kubenswrapper[4915]: W1124 21:39:51.962024 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e7d248b_8288_4385_86b4_30fa3d43b8a4.slice/crio-21485f7f68e7bdf1e303f0f7e8fec651841e67d553296aad6adae09f899ab85d WatchSource:0}: Error finding container 21485f7f68e7bdf1e303f0f7e8fec651841e67d553296aad6adae09f899ab85d: Status 404 returned error can't find the container with id 21485f7f68e7bdf1e303f0f7e8fec651841e67d553296aad6adae09f899ab85d Nov 24 21:39:52 crc kubenswrapper[4915]: I1124 21:39:52.872546 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-28jcc" event={"ID":"1e7d248b-8288-4385-86b4-30fa3d43b8a4","Type":"ContainerStarted","Data":"21485f7f68e7bdf1e303f0f7e8fec651841e67d553296aad6adae09f899ab85d"} Nov 24 21:39:52 crc kubenswrapper[4915]: I1124 21:39:52.874179 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" 
event={"ID":"8babaf8f-1e9a-4a58-b846-010ccf01a4ab","Type":"ContainerStarted","Data":"8a4d83ed1cf4695bb1050535fb39aebe21328cea719f20e951de21f4bd77a636"} Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.211061 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-28jcc"] Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.260081 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bbxkx"] Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.261662 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.281903 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bbxkx"] Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.413120 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/086e5806-87ef-4c73-8446-192165490619-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bbxkx\" (UID: \"086e5806-87ef-4c73-8446-192165490619\") " pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.413249 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nppd\" (UniqueName: \"kubernetes.io/projected/086e5806-87ef-4c73-8446-192165490619-kube-api-access-9nppd\") pod \"dnsmasq-dns-666b6646f7-bbxkx\" (UID: \"086e5806-87ef-4c73-8446-192165490619\") " pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.413296 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086e5806-87ef-4c73-8446-192165490619-config\") pod \"dnsmasq-dns-666b6646f7-bbxkx\" (UID: \"086e5806-87ef-4c73-8446-192165490619\") " 
pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.501637 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v5lml"] Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.518748 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086e5806-87ef-4c73-8446-192165490619-config\") pod \"dnsmasq-dns-666b6646f7-bbxkx\" (UID: \"086e5806-87ef-4c73-8446-192165490619\") " pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.518904 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/086e5806-87ef-4c73-8446-192165490619-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bbxkx\" (UID: \"086e5806-87ef-4c73-8446-192165490619\") " pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.518985 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nppd\" (UniqueName: \"kubernetes.io/projected/086e5806-87ef-4c73-8446-192165490619-kube-api-access-9nppd\") pod \"dnsmasq-dns-666b6646f7-bbxkx\" (UID: \"086e5806-87ef-4c73-8446-192165490619\") " pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.520481 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086e5806-87ef-4c73-8446-192165490619-config\") pod \"dnsmasq-dns-666b6646f7-bbxkx\" (UID: \"086e5806-87ef-4c73-8446-192165490619\") " pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.521073 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/086e5806-87ef-4c73-8446-192165490619-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bbxkx\" 
(UID: \"086e5806-87ef-4c73-8446-192165490619\") " pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.529723 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m8vp9"] Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.532223 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.545038 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m8vp9"] Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.559771 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nppd\" (UniqueName: \"kubernetes.io/projected/086e5806-87ef-4c73-8446-192165490619-kube-api-access-9nppd\") pod \"dnsmasq-dns-666b6646f7-bbxkx\" (UID: \"086e5806-87ef-4c73-8446-192165490619\") " pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.593324 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.725074 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfljs\" (UniqueName: \"kubernetes.io/projected/e084c012-7c0b-407b-8e65-31598c80a76f-kube-api-access-vfljs\") pod \"dnsmasq-dns-57d769cc4f-m8vp9\" (UID: \"e084c012-7c0b-407b-8e65-31598c80a76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.725544 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e084c012-7c0b-407b-8e65-31598c80a76f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-m8vp9\" (UID: \"e084c012-7c0b-407b-8e65-31598c80a76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.725579 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e084c012-7c0b-407b-8e65-31598c80a76f-config\") pod \"dnsmasq-dns-57d769cc4f-m8vp9\" (UID: \"e084c012-7c0b-407b-8e65-31598c80a76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.827298 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e084c012-7c0b-407b-8e65-31598c80a76f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-m8vp9\" (UID: \"e084c012-7c0b-407b-8e65-31598c80a76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.827341 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e084c012-7c0b-407b-8e65-31598c80a76f-config\") pod \"dnsmasq-dns-57d769cc4f-m8vp9\" (UID: \"e084c012-7c0b-407b-8e65-31598c80a76f\") " 
pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.827431 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfljs\" (UniqueName: \"kubernetes.io/projected/e084c012-7c0b-407b-8e65-31598c80a76f-kube-api-access-vfljs\") pod \"dnsmasq-dns-57d769cc4f-m8vp9\" (UID: \"e084c012-7c0b-407b-8e65-31598c80a76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.828487 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e084c012-7c0b-407b-8e65-31598c80a76f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-m8vp9\" (UID: \"e084c012-7c0b-407b-8e65-31598c80a76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.829037 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e084c012-7c0b-407b-8e65-31598c80a76f-config\") pod \"dnsmasq-dns-57d769cc4f-m8vp9\" (UID: \"e084c012-7c0b-407b-8e65-31598c80a76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" Nov 24 21:39:54 crc kubenswrapper[4915]: I1124 21:39:54.863645 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfljs\" (UniqueName: \"kubernetes.io/projected/e084c012-7c0b-407b-8e65-31598c80a76f-kube-api-access-vfljs\") pod \"dnsmasq-dns-57d769cc4f-m8vp9\" (UID: \"e084c012-7c0b-407b-8e65-31598c80a76f\") " pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.154933 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.181635 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bbxkx"] Nov 24 21:39:55 crc kubenswrapper[4915]: W1124 21:39:55.207225 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod086e5806_87ef_4c73_8446_192165490619.slice/crio-19d331378b1c794965ab9c8230f99d2d682be60022ea2b1bc49a0564274bbaf9 WatchSource:0}: Error finding container 19d331378b1c794965ab9c8230f99d2d682be60022ea2b1bc49a0564274bbaf9: Status 404 returned error can't find the container with id 19d331378b1c794965ab9c8230f99d2d682be60022ea2b1bc49a0564274bbaf9 Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.361045 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.363036 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.365580 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.365922 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.366041 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.366148 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5zgjl" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.373057 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.373150 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.373385 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.395770 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.538506 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.538555 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.538585 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ftnx\" (UniqueName: \"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-kube-api-access-7ftnx\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.538605 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.538707 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a45944d3-396b-4683-b9b5-8e42e9331043-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.538727 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.538745 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.538760 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.538802 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-config-data\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.539012 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a45944d3-396b-4683-b9b5-8e42e9331043-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.539053 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.634862 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m8vp9"] Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.641015 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.641078 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.641120 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ftnx\" (UniqueName: \"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-kube-api-access-7ftnx\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.641142 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.641218 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a45944d3-396b-4683-b9b5-8e42e9331043-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.641247 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.641269 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.641288 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.641321 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-config-data\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.641388 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a45944d3-396b-4683-b9b5-8e42e9331043-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.641422 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.641865 4915 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.643265 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.643758 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.643930 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.644139 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-config-data\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.644217 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.649939 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.650367 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.652941 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a45944d3-396b-4683-b9b5-8e42e9331043-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.654588 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a45944d3-396b-4683-b9b5-8e42e9331043-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.673817 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.675139 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.681579 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.681951 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.683106 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.687976 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.688026 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.688051 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-m2qf6" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.688055 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.687985 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.706970 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.711041 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ftnx\" (UniqueName: 
\"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-kube-api-access-7ftnx\") pod \"rabbitmq-server-0\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " pod="openstack/rabbitmq-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.844584 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.844640 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c50db1c-ac88-4299-ab96-8b750308610f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.844707 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.844747 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.844774 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2t62\" (UniqueName: 
\"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-kube-api-access-w2t62\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.844827 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c50db1c-ac88-4299-ab96-8b750308610f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.844873 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.845036 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.845163 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.845197 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.845297 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.924656 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" event={"ID":"086e5806-87ef-4c73-8446-192165490619","Type":"ContainerStarted","Data":"19d331378b1c794965ab9c8230f99d2d682be60022ea2b1bc49a0564274bbaf9"} Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.926424 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" event={"ID":"e084c012-7c0b-407b-8e65-31598c80a76f","Type":"ContainerStarted","Data":"30d158408e17c80b81e816ce08395ac6d1a520500222ca9d8903b4238cb112e8"} Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.948906 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.949043 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.949072 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.949134 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.949157 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.949173 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c50db1c-ac88-4299-ab96-8b750308610f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.949238 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.949292 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.949310 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2t62\" (UniqueName: \"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-kube-api-access-w2t62\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.949314 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.949352 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c50db1c-ac88-4299-ab96-8b750308610f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.949436 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.950040 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.950500 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.951021 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.951293 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.952412 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.954205 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.956211 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c50db1c-ac88-4299-ab96-8b750308610f-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.968427 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.969652 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c50db1c-ac88-4299-ab96-8b750308610f-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.977424 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2t62\" (UniqueName: \"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-kube-api-access-w2t62\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:55 crc kubenswrapper[4915]: I1124 21:39:55.990786 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:39:56 crc kubenswrapper[4915]: I1124 21:39:56.019751 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:56 crc kubenswrapper[4915]: I1124 21:39:56.072285 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:39:56 crc kubenswrapper[4915]: I1124 21:39:56.593983 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:39:56 crc kubenswrapper[4915]: I1124 21:39:56.743200 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:39:56 crc kubenswrapper[4915]: I1124 21:39:56.936895 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a45944d3-396b-4683-b9b5-8e42e9331043","Type":"ContainerStarted","Data":"56dd50820f29c65e8a0f79d8019990f79aa29ae33e7c868a42b7bb19461c4ce4"} Nov 24 21:39:56 crc kubenswrapper[4915]: I1124 21:39:56.938675 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c50db1c-ac88-4299-ab96-8b750308610f","Type":"ContainerStarted","Data":"e05bb94401ccc07fabe73cf4cc54ed5685ae000ff382736c28f21a1b6526e564"} Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.199849 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.201699 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.205302 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-g79x9" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.205724 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.206189 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.206746 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.208853 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.226568 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.289290 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0914514-21db-4664-9fdf-935c0f671637-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.289577 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvccz\" (UniqueName: \"kubernetes.io/projected/b0914514-21db-4664-9fdf-935c0f671637-kube-api-access-kvccz\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.289693 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b0914514-21db-4664-9fdf-935c0f671637-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.289783 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0914514-21db-4664-9fdf-935c0f671637-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.289940 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0914514-21db-4664-9fdf-935c0f671637-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.290077 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.290159 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b0914514-21db-4664-9fdf-935c0f671637-kolla-config\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.290283 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/b0914514-21db-4664-9fdf-935c0f671637-config-data-default\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.391520 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b0914514-21db-4664-9fdf-935c0f671637-config-data-default\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.391568 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0914514-21db-4664-9fdf-935c0f671637-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.391592 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvccz\" (UniqueName: \"kubernetes.io/projected/b0914514-21db-4664-9fdf-935c0f671637-kube-api-access-kvccz\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.391611 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b0914514-21db-4664-9fdf-935c0f671637-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.391630 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0914514-21db-4664-9fdf-935c0f671637-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.391674 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0914514-21db-4664-9fdf-935c0f671637-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.391731 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.391765 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b0914514-21db-4664-9fdf-935c0f671637-kolla-config\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.392009 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.392134 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b0914514-21db-4664-9fdf-935c0f671637-config-data-generated\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 
21:39:57.392669 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b0914514-21db-4664-9fdf-935c0f671637-kolla-config\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.393103 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0914514-21db-4664-9fdf-935c0f671637-operator-scripts\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.393339 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b0914514-21db-4664-9fdf-935c0f671637-config-data-default\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.397924 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0914514-21db-4664-9fdf-935c0f671637-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.400410 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0914514-21db-4664-9fdf-935c0f671637-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.423217 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") 
pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.428059 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvccz\" (UniqueName: \"kubernetes.io/projected/b0914514-21db-4664-9fdf-935c0f671637-kube-api-access-kvccz\") pod \"openstack-galera-0\" (UID: \"b0914514-21db-4664-9fdf-935c0f671637\") " pod="openstack/openstack-galera-0" Nov 24 21:39:57 crc kubenswrapper[4915]: I1124 21:39:57.548609 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 24 21:39:58 crc kubenswrapper[4915]: I1124 21:39:58.209408 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 24 21:39:58 crc kubenswrapper[4915]: I1124 21:39:58.900658 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 24 21:39:58 crc kubenswrapper[4915]: I1124 21:39:58.902993 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 24 21:39:58 crc kubenswrapper[4915]: I1124 21:39:58.913715 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-v4qqc" Nov 24 21:39:58 crc kubenswrapper[4915]: I1124 21:39:58.914220 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 24 21:39:58 crc kubenswrapper[4915]: I1124 21:39:58.914410 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 24 21:39:58 crc kubenswrapper[4915]: I1124 21:39:58.914677 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.036431 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b0914514-21db-4664-9fdf-935c0f671637","Type":"ContainerStarted","Data":"0b902dbb388d7e3710b851d189ee6755ac1ae41bd6ddc070488ddb9958214f63"} Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.081212 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/59818a4e-515f-477f-96ad-78e6b8310657-memcached-tls-certs\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.081442 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdwmw\" (UniqueName: \"kubernetes.io/projected/59818a4e-515f-477f-96ad-78e6b8310657-kube-api-access-cdwmw\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.081510 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/59818a4e-515f-477f-96ad-78e6b8310657-kolla-config\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.081566 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59818a4e-515f-477f-96ad-78e6b8310657-combined-ca-bundle\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.081624 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59818a4e-515f-477f-96ad-78e6b8310657-config-data\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.185561 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdwmw\" (UniqueName: \"kubernetes.io/projected/59818a4e-515f-477f-96ad-78e6b8310657-kube-api-access-cdwmw\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.185611 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/59818a4e-515f-477f-96ad-78e6b8310657-kolla-config\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.185646 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59818a4e-515f-477f-96ad-78e6b8310657-combined-ca-bundle\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " 
pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.185677 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59818a4e-515f-477f-96ad-78e6b8310657-config-data\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.185762 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/59818a4e-515f-477f-96ad-78e6b8310657-memcached-tls-certs\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.186724 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/59818a4e-515f-477f-96ad-78e6b8310657-kolla-config\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.188928 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/59818a4e-515f-477f-96ad-78e6b8310657-config-data\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.192857 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/59818a4e-515f-477f-96ad-78e6b8310657-memcached-tls-certs\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.201381 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/59818a4e-515f-477f-96ad-78e6b8310657-combined-ca-bundle\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.209555 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdwmw\" (UniqueName: \"kubernetes.io/projected/59818a4e-515f-477f-96ad-78e6b8310657-kube-api-access-cdwmw\") pod \"memcached-0\" (UID: \"59818a4e-515f-477f-96ad-78e6b8310657\") " pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.243733 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.744483 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.770014 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.770105 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.775479 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-t8s89" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.775704 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.775836 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.780152 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.902527 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.902656 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00638de-cc40-405d-b271-b681f199a172-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.902704 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d00638de-cc40-405d-b271-b681f199a172-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:39:59 crc 
kubenswrapper[4915]: I1124 21:39:59.902738 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00638de-cc40-405d-b271-b681f199a172-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.902757 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d00638de-cc40-405d-b271-b681f199a172-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.902818 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pst58\" (UniqueName: \"kubernetes.io/projected/d00638de-cc40-405d-b271-b681f199a172-kube-api-access-pst58\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.902877 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d00638de-cc40-405d-b271-b681f199a172-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:39:59 crc kubenswrapper[4915]: I1124 21:39:59.902916 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d00638de-cc40-405d-b271-b681f199a172-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " 
pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.004358 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d00638de-cc40-405d-b271-b681f199a172-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.004662 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d00638de-cc40-405d-b271-b681f199a172-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.004693 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.004771 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00638de-cc40-405d-b271-b681f199a172-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.004822 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d00638de-cc40-405d-b271-b681f199a172-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 
21:40:00.004846 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00638de-cc40-405d-b271-b681f199a172-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.004865 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d00638de-cc40-405d-b271-b681f199a172-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.004900 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pst58\" (UniqueName: \"kubernetes.io/projected/d00638de-cc40-405d-b271-b681f199a172-kube-api-access-pst58\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.005463 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d00638de-cc40-405d-b271-b681f199a172-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.005561 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.005597 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d00638de-cc40-405d-b271-b681f199a172-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.005914 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d00638de-cc40-405d-b271-b681f199a172-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.006659 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d00638de-cc40-405d-b271-b681f199a172-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.010078 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d00638de-cc40-405d-b271-b681f199a172-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.017270 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d00638de-cc40-405d-b271-b681f199a172-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.032503 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pst58\" (UniqueName: 
\"kubernetes.io/projected/d00638de-cc40-405d-b271-b681f199a172-kube-api-access-pst58\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.033848 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d00638de-cc40-405d-b271-b681f199a172\") " pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.092264 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.947171 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.948674 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.954570 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-pcw7m" Nov 24 21:40:00 crc kubenswrapper[4915]: I1124 21:40:00.956959 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:40:01 crc kubenswrapper[4915]: I1124 21:40:01.028753 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5j7q\" (UniqueName: \"kubernetes.io/projected/f841b28e-565b-41e8-9288-582e862cdceb-kube-api-access-f5j7q\") pod \"kube-state-metrics-0\" (UID: \"f841b28e-565b-41e8-9288-582e862cdceb\") " pod="openstack/kube-state-metrics-0" Nov 24 21:40:01 crc kubenswrapper[4915]: I1124 21:40:01.131418 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5j7q\" (UniqueName: \"kubernetes.io/projected/f841b28e-565b-41e8-9288-582e862cdceb-kube-api-access-f5j7q\") pod \"kube-state-metrics-0\" (UID: \"f841b28e-565b-41e8-9288-582e862cdceb\") " pod="openstack/kube-state-metrics-0" Nov 24 21:40:01 crc kubenswrapper[4915]: I1124 21:40:01.156281 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5j7q\" (UniqueName: \"kubernetes.io/projected/f841b28e-565b-41e8-9288-582e862cdceb-kube-api-access-f5j7q\") pod \"kube-state-metrics-0\" (UID: \"f841b28e-565b-41e8-9288-582e862cdceb\") " pod="openstack/kube-state-metrics-0" Nov 24 21:40:01 crc kubenswrapper[4915]: I1124 21:40:01.285533 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:40:01 crc kubenswrapper[4915]: I1124 21:40:01.696299 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-8dsqc"] Nov 24 21:40:01 crc kubenswrapper[4915]: I1124 21:40:01.697508 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-8dsqc" Nov 24 21:40:01 crc kubenswrapper[4915]: I1124 21:40:01.700206 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-bvz5z" Nov 24 21:40:01 crc kubenswrapper[4915]: I1124 21:40:01.700350 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Nov 24 21:40:01 crc kubenswrapper[4915]: I1124 21:40:01.718861 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-8dsqc"] Nov 24 21:40:01 crc kubenswrapper[4915]: I1124 21:40:01.858070 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22ffe2e4-a7d4-4af8-93e6-1255daaabe16-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-8dsqc\" (UID: \"22ffe2e4-a7d4-4af8-93e6-1255daaabe16\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-8dsqc" Nov 24 21:40:01 crc kubenswrapper[4915]: I1124 21:40:01.858175 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds6q6\" (UniqueName: \"kubernetes.io/projected/22ffe2e4-a7d4-4af8-93e6-1255daaabe16-kube-api-access-ds6q6\") pod \"observability-ui-dashboards-7d5fb4cbfb-8dsqc\" (UID: \"22ffe2e4-a7d4-4af8-93e6-1255daaabe16\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-8dsqc" Nov 24 21:40:01 crc kubenswrapper[4915]: I1124 
21:40:01.960351 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22ffe2e4-a7d4-4af8-93e6-1255daaabe16-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-8dsqc\" (UID: \"22ffe2e4-a7d4-4af8-93e6-1255daaabe16\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-8dsqc" Nov 24 21:40:01 crc kubenswrapper[4915]: I1124 21:40:01.960508 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds6q6\" (UniqueName: \"kubernetes.io/projected/22ffe2e4-a7d4-4af8-93e6-1255daaabe16-kube-api-access-ds6q6\") pod \"observability-ui-dashboards-7d5fb4cbfb-8dsqc\" (UID: \"22ffe2e4-a7d4-4af8-93e6-1255daaabe16\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-8dsqc" Nov 24 21:40:01 crc kubenswrapper[4915]: I1124 21:40:01.966832 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22ffe2e4-a7d4-4af8-93e6-1255daaabe16-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-8dsqc\" (UID: \"22ffe2e4-a7d4-4af8-93e6-1255daaabe16\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-8dsqc" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.006924 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds6q6\" (UniqueName: \"kubernetes.io/projected/22ffe2e4-a7d4-4af8-93e6-1255daaabe16-kube-api-access-ds6q6\") pod \"observability-ui-dashboards-7d5fb4cbfb-8dsqc\" (UID: \"22ffe2e4-a7d4-4af8-93e6-1255daaabe16\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-8dsqc" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.034691 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-8dsqc" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.058054 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-76b59b6c64-9qb78"] Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.060588 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.088505 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76b59b6c64-9qb78"] Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.164027 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31d679a7-9921-4b31-a4bd-7bc758fdd651-console-oauth-config\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.164109 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31d679a7-9921-4b31-a4bd-7bc758fdd651-trusted-ca-bundle\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.164131 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31d679a7-9921-4b31-a4bd-7bc758fdd651-service-ca\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.164183 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-z27hz\" (UniqueName: \"kubernetes.io/projected/31d679a7-9921-4b31-a4bd-7bc758fdd651-kube-api-access-z27hz\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.164226 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31d679a7-9921-4b31-a4bd-7bc758fdd651-console-config\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.164244 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31d679a7-9921-4b31-a4bd-7bc758fdd651-oauth-serving-cert\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.164303 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31d679a7-9921-4b31-a4bd-7bc758fdd651-console-serving-cert\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.216732 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.218757 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.231511 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-9hgrx" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.234741 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.266876 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31d679a7-9921-4b31-a4bd-7bc758fdd651-trusted-ca-bundle\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.266931 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31d679a7-9921-4b31-a4bd-7bc758fdd651-service-ca\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.267012 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z27hz\" (UniqueName: \"kubernetes.io/projected/31d679a7-9921-4b31-a4bd-7bc758fdd651-kube-api-access-z27hz\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.267067 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31d679a7-9921-4b31-a4bd-7bc758fdd651-console-config\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " 
pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.267093 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31d679a7-9921-4b31-a4bd-7bc758fdd651-oauth-serving-cert\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.267139 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31d679a7-9921-4b31-a4bd-7bc758fdd651-console-serving-cert\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.267215 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/31d679a7-9921-4b31-a4bd-7bc758fdd651-console-oauth-config\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.268821 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31d679a7-9921-4b31-a4bd-7bc758fdd651-trusted-ca-bundle\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.269633 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/31d679a7-9921-4b31-a4bd-7bc758fdd651-service-ca\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " 
pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.269876 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/31d679a7-9921-4b31-a4bd-7bc758fdd651-oauth-serving-cert\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.270125 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.270384 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.270583 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/31d679a7-9921-4b31-a4bd-7bc758fdd651-console-config\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.270711 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.307658 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/31d679a7-9921-4b31-a4bd-7bc758fdd651-console-serving-cert\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.315736 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/31d679a7-9921-4b31-a4bd-7bc758fdd651-console-oauth-config\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.369821 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-config\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.369871 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f46d0998-3845-4338-bf4d-9c6294f76988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f46d0998-3845-4338-bf4d-9c6294f76988\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.369901 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/59f54e36-0033-4e91-be8b-7d447d666d04-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.369943 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 
21:40:02.369978 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.369999 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfh5v\" (UniqueName: \"kubernetes.io/projected/59f54e36-0033-4e91-be8b-7d447d666d04-kube-api-access-jfh5v\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.370031 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/59f54e36-0033-4e91-be8b-7d447d666d04-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.370070 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/59f54e36-0033-4e91-be8b-7d447d666d04-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.392443 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.399965 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z27hz\" (UniqueName: 
\"kubernetes.io/projected/31d679a7-9921-4b31-a4bd-7bc758fdd651-kube-api-access-z27hz\") pod \"console-76b59b6c64-9qb78\" (UID: \"31d679a7-9921-4b31-a4bd-7bc758fdd651\") " pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.400394 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.473307 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-config\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.473381 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f46d0998-3845-4338-bf4d-9c6294f76988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f46d0998-3845-4338-bf4d-9c6294f76988\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.473417 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/59f54e36-0033-4e91-be8b-7d447d666d04-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.473458 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " 
pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.473498 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.473520 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfh5v\" (UniqueName: \"kubernetes.io/projected/59f54e36-0033-4e91-be8b-7d447d666d04-kube-api-access-jfh5v\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.473550 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/59f54e36-0033-4e91-be8b-7d447d666d04-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.473590 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/59f54e36-0033-4e91-be8b-7d447d666d04-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.476583 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/59f54e36-0033-4e91-be8b-7d447d666d04-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " 
pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.483427 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.484312 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.484318 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/59f54e36-0033-4e91-be8b-7d447d666d04-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.484559 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.484582 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f46d0998-3845-4338-bf4d-9c6294f76988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f46d0998-3845-4338-bf4d-9c6294f76988\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2d77608a2870e9deeb05060c3c02e327bdb3f202575835cd4a227365c50b165b/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.484703 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-config\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.485812 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/59f54e36-0033-4e91-be8b-7d447d666d04-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.503722 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfh5v\" (UniqueName: \"kubernetes.io/projected/59f54e36-0033-4e91-be8b-7d447d666d04-kube-api-access-jfh5v\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.584324 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f46d0998-3845-4338-bf4d-9c6294f76988\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f46d0998-3845-4338-bf4d-9c6294f76988\") pod \"prometheus-metric-storage-0\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.696080 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:02 crc kubenswrapper[4915]: I1124 21:40:02.883958 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:40:04 crc kubenswrapper[4915]: I1124 21:40:04.953214 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 21:40:04 crc kubenswrapper[4915]: I1124 21:40:04.955333 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:04 crc kubenswrapper[4915]: I1124 21:40:04.959147 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 24 21:40:04 crc kubenswrapper[4915]: I1124 21:40:04.959511 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 24 21:40:04 crc kubenswrapper[4915]: I1124 21:40:04.959693 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 24 21:40:04 crc kubenswrapper[4915]: I1124 21:40:04.959865 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 24 21:40:04 crc kubenswrapper[4915]: I1124 21:40:04.960140 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-tbr9d" Nov 24 21:40:04 crc kubenswrapper[4915]: I1124 21:40:04.970555 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.127161 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/30bb22de-3ca5-4580-92cf-09e8653a98ab-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.127245 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30bb22de-3ca5-4580-92cf-09e8653a98ab-config\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.127286 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.127318 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtpc4\" (UniqueName: \"kubernetes.io/projected/30bb22de-3ca5-4580-92cf-09e8653a98ab-kube-api-access-mtpc4\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.127350 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/30bb22de-3ca5-4580-92cf-09e8653a98ab-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.127382 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bb22de-3ca5-4580-92cf-09e8653a98ab-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.127429 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30bb22de-3ca5-4580-92cf-09e8653a98ab-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.127477 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/30bb22de-3ca5-4580-92cf-09e8653a98ab-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.161525 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wj47k"] Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.164046 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.169226 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.169544 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.169882 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-4rfbq" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.177379 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wj47k"] Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.210161 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-mszzs"] Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.214265 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.226354 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mszzs"] Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.228544 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtpc4\" (UniqueName: \"kubernetes.io/projected/30bb22de-3ca5-4580-92cf-09e8653a98ab-kube-api-access-mtpc4\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.228687 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/30bb22de-3ca5-4580-92cf-09e8653a98ab-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.229007 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bb22de-3ca5-4580-92cf-09e8653a98ab-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.229817 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30bb22de-3ca5-4580-92cf-09e8653a98ab-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.229950 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/30bb22de-3ca5-4580-92cf-09e8653a98ab-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: 
\"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.230042 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/30bb22de-3ca5-4580-92cf-09e8653a98ab-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.230120 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30bb22de-3ca5-4580-92cf-09e8653a98ab-config\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.230208 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.230449 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.229334 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/30bb22de-3ca5-4580-92cf-09e8653a98ab-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.232043 4915 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30bb22de-3ca5-4580-92cf-09e8653a98ab-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.236040 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30bb22de-3ca5-4580-92cf-09e8653a98ab-config\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.237043 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bb22de-3ca5-4580-92cf-09e8653a98ab-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.237659 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/30bb22de-3ca5-4580-92cf-09e8653a98ab-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.239541 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/30bb22de-3ca5-4580-92cf-09e8653a98ab-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.248374 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtpc4\" (UniqueName: \"kubernetes.io/projected/30bb22de-3ca5-4580-92cf-09e8653a98ab-kube-api-access-mtpc4\") pod \"ovsdbserver-nb-0\" (UID: 
\"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.279072 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"30bb22de-3ca5-4580-92cf-09e8653a98ab\") " pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.331912 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-combined-ca-bundle\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.331963 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e5c7ffec-8976-487c-9bff-d3697cef3724-var-lib\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.331983 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e5c7ffec-8976-487c-9bff-d3697cef3724-var-run\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.332005 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-var-run-ovn\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc 
kubenswrapper[4915]: I1124 21:40:05.332028 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e5c7ffec-8976-487c-9bff-d3697cef3724-etc-ovs\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.332052 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48pnz\" (UniqueName: \"kubernetes.io/projected/e5c7ffec-8976-487c-9bff-d3697cef3724-kube-api-access-48pnz\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.332360 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e5c7ffec-8976-487c-9bff-d3697cef3724-var-log\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.332580 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-var-log-ovn\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.332698 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-var-run\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.332731 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vcl8\" (UniqueName: \"kubernetes.io/projected/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-kube-api-access-4vcl8\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.332819 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5c7ffec-8976-487c-9bff-d3697cef3724-scripts\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.332914 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-scripts\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.332946 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-ovn-controller-tls-certs\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.433864 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-var-log-ovn\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.433921 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-var-run\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.433939 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vcl8\" (UniqueName: \"kubernetes.io/projected/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-kube-api-access-4vcl8\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.433955 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5c7ffec-8976-487c-9bff-d3697cef3724-scripts\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.433983 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-scripts\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434000 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-ovn-controller-tls-certs\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434034 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-combined-ca-bundle\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434050 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e5c7ffec-8976-487c-9bff-d3697cef3724-var-lib\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434063 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e5c7ffec-8976-487c-9bff-d3697cef3724-var-run\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434079 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-var-run-ovn\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434099 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e5c7ffec-8976-487c-9bff-d3697cef3724-etc-ovs\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434117 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48pnz\" (UniqueName: \"kubernetes.io/projected/e5c7ffec-8976-487c-9bff-d3697cef3724-kube-api-access-48pnz\") pod \"ovn-controller-ovs-mszzs\" (UID: 
\"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434171 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e5c7ffec-8976-487c-9bff-d3697cef3724-var-log\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434412 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-var-run\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434456 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/e5c7ffec-8976-487c-9bff-d3697cef3724-var-log\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434501 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-var-log-ovn\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434596 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/e5c7ffec-8976-487c-9bff-d3697cef3724-var-lib\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434704 4915 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e5c7ffec-8976-487c-9bff-d3697cef3724-var-run\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434805 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-var-run-ovn\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.434908 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/e5c7ffec-8976-487c-9bff-d3697cef3724-etc-ovs\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.437579 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-ovn-controller-tls-certs\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.437797 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e5c7ffec-8976-487c-9bff-d3697cef3724-scripts\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.438105 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-scripts\") pod \"ovn-controller-wj47k\" (UID: 
\"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.443303 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-combined-ca-bundle\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.453637 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vcl8\" (UniqueName: \"kubernetes.io/projected/67dd4a19-a0b7-4c7b-8289-40b7fc2476dc-kube-api-access-4vcl8\") pod \"ovn-controller-wj47k\" (UID: \"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc\") " pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.462475 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48pnz\" (UniqueName: \"kubernetes.io/projected/e5c7ffec-8976-487c-9bff-d3697cef3724-kube-api-access-48pnz\") pod \"ovn-controller-ovs-mszzs\" (UID: \"e5c7ffec-8976-487c-9bff-d3697cef3724\") " pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.483397 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wj47k" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.573006 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:05 crc kubenswrapper[4915]: I1124 21:40:05.611839 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.018330 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.020641 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.032269 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.032489 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.032600 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-gdxn4" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.032739 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.055451 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.153565 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.153637 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df07ff21-06da-4be5-85f8-cb5821d002bb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" 
Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.153681 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df07ff21-06da-4be5-85f8-cb5821d002bb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.153710 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chhsw\" (UniqueName: \"kubernetes.io/projected/df07ff21-06da-4be5-85f8-cb5821d002bb-kube-api-access-chhsw\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.153738 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df07ff21-06da-4be5-85f8-cb5821d002bb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.154076 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df07ff21-06da-4be5-85f8-cb5821d002bb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.154406 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df07ff21-06da-4be5-85f8-cb5821d002bb-config\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.154508 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/df07ff21-06da-4be5-85f8-cb5821d002bb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.256640 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df07ff21-06da-4be5-85f8-cb5821d002bb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.256692 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df07ff21-06da-4be5-85f8-cb5821d002bb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.256716 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chhsw\" (UniqueName: \"kubernetes.io/projected/df07ff21-06da-4be5-85f8-cb5821d002bb-kube-api-access-chhsw\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.256740 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df07ff21-06da-4be5-85f8-cb5821d002bb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.256791 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/df07ff21-06da-4be5-85f8-cb5821d002bb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.256866 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df07ff21-06da-4be5-85f8-cb5821d002bb-config\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.256894 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/df07ff21-06da-4be5-85f8-cb5821d002bb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.256951 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.257253 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.258093 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/df07ff21-06da-4be5-85f8-cb5821d002bb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc 
kubenswrapper[4915]: I1124 21:40:09.258429 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/df07ff21-06da-4be5-85f8-cb5821d002bb-config\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.259269 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df07ff21-06da-4be5-85f8-cb5821d002bb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.263381 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/df07ff21-06da-4be5-85f8-cb5821d002bb-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.264356 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df07ff21-06da-4be5-85f8-cb5821d002bb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.274164 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/df07ff21-06da-4be5-85f8-cb5821d002bb-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.277061 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chhsw\" (UniqueName: 
\"kubernetes.io/projected/df07ff21-06da-4be5-85f8-cb5821d002bb-kube-api-access-chhsw\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.283654 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"df07ff21-06da-4be5-85f8-cb5821d002bb\") " pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:09 crc kubenswrapper[4915]: I1124 21:40:09.357330 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:12 crc kubenswrapper[4915]: E1124 21:40:12.924286 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 24 21:40:12 crc kubenswrapper[4915]: E1124 21:40:12.925088 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2t62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(8c50db1c-ac88-4299-ab96-8b750308610f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:40:12 crc 
kubenswrapper[4915]: E1124 21:40:12.926388 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="8c50db1c-ac88-4299-ab96-8b750308610f" Nov 24 21:40:13 crc kubenswrapper[4915]: E1124 21:40:13.203865 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="8c50db1c-ac88-4299-ab96-8b750308610f" Nov 24 21:40:17 crc kubenswrapper[4915]: E1124 21:40:17.359923 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 24 21:40:17 crc kubenswrapper[4915]: E1124 21:40:17.361547 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ftnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(a45944d3-396b-4683-b9b5-8e42e9331043): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:40:17 crc 
kubenswrapper[4915]: E1124 21:40:17.362865 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="a45944d3-396b-4683-b9b5-8e42e9331043" Nov 24 21:40:18 crc kubenswrapper[4915]: E1124 21:40:18.241960 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="a45944d3-396b-4683-b9b5-8e42e9331043" Nov 24 21:40:19 crc kubenswrapper[4915]: E1124 21:40:19.770857 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 21:40:19 crc kubenswrapper[4915]: E1124 21:40:19.771043 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vg7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-28jcc_openstack(1e7d248b-8288-4385-86b4-30fa3d43b8a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:40:19 crc kubenswrapper[4915]: E1124 21:40:19.772905 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-28jcc" podUID="1e7d248b-8288-4385-86b4-30fa3d43b8a4" Nov 24 21:40:19 crc kubenswrapper[4915]: E1124 21:40:19.778495 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 21:40:19 crc kubenswrapper[4915]: E1124 21:40:19.778624 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vfljs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPoli
cy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-m8vp9_openstack(e084c012-7c0b-407b-8e65-31598c80a76f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:40:19 crc kubenswrapper[4915]: E1124 21:40:19.779913 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" podUID="e084c012-7c0b-407b-8e65-31598c80a76f" Nov 24 21:40:19 crc kubenswrapper[4915]: E1124 21:40:19.783592 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 21:40:19 crc kubenswrapper[4915]: E1124 21:40:19.783680 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9nppd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-bbxkx_openstack(086e5806-87ef-4c73-8446-192165490619): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:40:19 crc kubenswrapper[4915]: E1124 21:40:19.785361 4915 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" podUID="086e5806-87ef-4c73-8446-192165490619" Nov 24 21:40:20 crc kubenswrapper[4915]: E1124 21:40:20.262460 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" podUID="086e5806-87ef-4c73-8446-192165490619" Nov 24 21:40:20 crc kubenswrapper[4915]: E1124 21:40:20.262807 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" podUID="e084c012-7c0b-407b-8e65-31598c80a76f" Nov 24 21:40:21 crc kubenswrapper[4915]: E1124 21:40:21.885395 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 24 21:40:21 crc kubenswrapper[4915]: E1124 21:40:21.886045 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djpkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-v5lml_openstack(8babaf8f-1e9a-4a58-b846-010ccf01a4ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:40:21 crc kubenswrapper[4915]: E1124 21:40:21.897117 4915 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" podUID="8babaf8f-1e9a-4a58-b846-010ccf01a4ab" Nov 24 21:40:22 crc kubenswrapper[4915]: I1124 21:40:22.151609 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-28jcc" Nov 24 21:40:22 crc kubenswrapper[4915]: I1124 21:40:22.242705 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vg7v\" (UniqueName: \"kubernetes.io/projected/1e7d248b-8288-4385-86b4-30fa3d43b8a4-kube-api-access-7vg7v\") pod \"1e7d248b-8288-4385-86b4-30fa3d43b8a4\" (UID: \"1e7d248b-8288-4385-86b4-30fa3d43b8a4\") " Nov 24 21:40:22 crc kubenswrapper[4915]: I1124 21:40:22.242764 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7d248b-8288-4385-86b4-30fa3d43b8a4-config\") pod \"1e7d248b-8288-4385-86b4-30fa3d43b8a4\" (UID: \"1e7d248b-8288-4385-86b4-30fa3d43b8a4\") " Nov 24 21:40:22 crc kubenswrapper[4915]: I1124 21:40:22.246360 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e7d248b-8288-4385-86b4-30fa3d43b8a4-config" (OuterVolumeSpecName: "config") pod "1e7d248b-8288-4385-86b4-30fa3d43b8a4" (UID: "1e7d248b-8288-4385-86b4-30fa3d43b8a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:22 crc kubenswrapper[4915]: I1124 21:40:22.249581 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e7d248b-8288-4385-86b4-30fa3d43b8a4-kube-api-access-7vg7v" (OuterVolumeSpecName: "kube-api-access-7vg7v") pod "1e7d248b-8288-4385-86b4-30fa3d43b8a4" (UID: "1e7d248b-8288-4385-86b4-30fa3d43b8a4"). InnerVolumeSpecName "kube-api-access-7vg7v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:22 crc kubenswrapper[4915]: I1124 21:40:22.297156 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-28jcc" Nov 24 21:40:22 crc kubenswrapper[4915]: I1124 21:40:22.297258 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-28jcc" event={"ID":"1e7d248b-8288-4385-86b4-30fa3d43b8a4","Type":"ContainerDied","Data":"21485f7f68e7bdf1e303f0f7e8fec651841e67d553296aad6adae09f899ab85d"} Nov 24 21:40:22 crc kubenswrapper[4915]: I1124 21:40:22.345510 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vg7v\" (UniqueName: \"kubernetes.io/projected/1e7d248b-8288-4385-86b4-30fa3d43b8a4-kube-api-access-7vg7v\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:22 crc kubenswrapper[4915]: I1124 21:40:22.345818 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e7d248b-8288-4385-86b4-30fa3d43b8a4-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:22 crc kubenswrapper[4915]: I1124 21:40:22.447467 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-28jcc"] Nov 24 21:40:22 crc kubenswrapper[4915]: W1124 21:40:22.452938 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59f54e36_0033_4e91_be8b_7d447d666d04.slice/crio-40674364d51505edffbe971664af73918e7803fc104f22e76a759d43eb969576 WatchSource:0}: Error finding container 40674364d51505edffbe971664af73918e7803fc104f22e76a759d43eb969576: Status 404 returned error can't find the container with id 40674364d51505edffbe971664af73918e7803fc104f22e76a759d43eb969576 Nov 24 21:40:22 crc kubenswrapper[4915]: I1124 21:40:22.458522 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-28jcc"] Nov 24 21:40:22 crc kubenswrapper[4915]: 
I1124 21:40:22.465981 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:40:22 crc kubenswrapper[4915]: I1124 21:40:22.493532 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 24 21:40:22 crc kubenswrapper[4915]: I1124 21:40:22.665503 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.263408 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.280397 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wj47k"] Nov 24 21:40:23 crc kubenswrapper[4915]: W1124 21:40:23.290428 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59818a4e_515f_477f_96ad_78e6b8310657.slice/crio-22017e7d38b312e0202136904f021ecbb15cb9a43a9d819c0f2200b29397bc3e WatchSource:0}: Error finding container 22017e7d38b312e0202136904f021ecbb15cb9a43a9d819c0f2200b29397bc3e: Status 404 returned error can't find the container with id 22017e7d38b312e0202136904f021ecbb15cb9a43a9d819c0f2200b29397bc3e Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.305660 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wj47k" event={"ID":"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc","Type":"ContainerStarted","Data":"5ed7dc1271bc3042e446d6a1ffa5b7651ab968b1f621a814bfc452663a7edb1c"} Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.306998 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"59818a4e-515f-477f-96ad-78e6b8310657","Type":"ContainerStarted","Data":"22017e7d38b312e0202136904f021ecbb15cb9a43a9d819c0f2200b29397bc3e"} Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.308369 4915 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" event={"ID":"8babaf8f-1e9a-4a58-b846-010ccf01a4ab","Type":"ContainerDied","Data":"8a4d83ed1cf4695bb1050535fb39aebe21328cea719f20e951de21f4bd77a636"} Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.308397 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a4d83ed1cf4695bb1050535fb39aebe21328cea719f20e951de21f4bd77a636" Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.309223 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f841b28e-565b-41e8-9288-582e862cdceb","Type":"ContainerStarted","Data":"5ea929aca1b2f9a811a01fedcdf6b8c67c43e716c1e30dbb79a6af374ca4ab72"} Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.310114 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b0914514-21db-4664-9fdf-935c0f671637","Type":"ContainerStarted","Data":"88d015c928c1835f396562e4e804807325b75fdc083b72752fe6111424ff4f87"} Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.311941 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"59f54e36-0033-4e91-be8b-7d447d666d04","Type":"ContainerStarted","Data":"40674364d51505edffbe971664af73918e7803fc104f22e76a759d43eb969576"} Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.313695 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d00638de-cc40-405d-b271-b681f199a172","Type":"ContainerStarted","Data":"19e84580522d9fbcdefd5c694398389e0384f4c472a5388100c06205cf725ca2"} Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.313723 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d00638de-cc40-405d-b271-b681f199a172","Type":"ContainerStarted","Data":"7cb3b27c1de26dcdce54ade1e0d186241633375916c661015e012c118b22271e"} Nov 24 21:40:23 crc 
kubenswrapper[4915]: I1124 21:40:23.380245 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.462726 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-76b59b6c64-9qb78"] Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.473858 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djpkk\" (UniqueName: \"kubernetes.io/projected/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-kube-api-access-djpkk\") pod \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\" (UID: \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\") " Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.473957 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-dns-svc\") pod \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\" (UID: \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\") " Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.474062 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-config\") pod \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\" (UID: \"8babaf8f-1e9a-4a58-b846-010ccf01a4ab\") " Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.475177 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8babaf8f-1e9a-4a58-b846-010ccf01a4ab" (UID: "8babaf8f-1e9a-4a58-b846-010ccf01a4ab"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.475348 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-config" (OuterVolumeSpecName: "config") pod "8babaf8f-1e9a-4a58-b846-010ccf01a4ab" (UID: "8babaf8f-1e9a-4a58-b846-010ccf01a4ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.576565 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:23 crc kubenswrapper[4915]: I1124 21:40:23.576596 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:24 crc kubenswrapper[4915]: I1124 21:40:24.090085 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-kube-api-access-djpkk" (OuterVolumeSpecName: "kube-api-access-djpkk") pod "8babaf8f-1e9a-4a58-b846-010ccf01a4ab" (UID: "8babaf8f-1e9a-4a58-b846-010ccf01a4ab"). InnerVolumeSpecName "kube-api-access-djpkk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:24 crc kubenswrapper[4915]: I1124 21:40:24.130958 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-8dsqc"] Nov 24 21:40:24 crc kubenswrapper[4915]: I1124 21:40:24.186686 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djpkk\" (UniqueName: \"kubernetes.io/projected/8babaf8f-1e9a-4a58-b846-010ccf01a4ab-kube-api-access-djpkk\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:24 crc kubenswrapper[4915]: I1124 21:40:24.323523 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76b59b6c64-9qb78" event={"ID":"31d679a7-9921-4b31-a4bd-7bc758fdd651","Type":"ContainerStarted","Data":"df2535d2263742c740004c737147d0bdfaa77396edc83a15e0b16ebf800da29e"} Nov 24 21:40:24 crc kubenswrapper[4915]: I1124 21:40:24.324983 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-8dsqc" event={"ID":"22ffe2e4-a7d4-4af8-93e6-1255daaabe16","Type":"ContainerStarted","Data":"cec0c06460e75be642d7b4508439a9866dffd47fb9729319605caf6738785824"} Nov 24 21:40:24 crc kubenswrapper[4915]: I1124 21:40:24.324996 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-v5lml" Nov 24 21:40:24 crc kubenswrapper[4915]: I1124 21:40:24.405101 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v5lml"] Nov 24 21:40:24 crc kubenswrapper[4915]: I1124 21:40:24.448813 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e7d248b-8288-4385-86b4-30fa3d43b8a4" path="/var/lib/kubelet/pods/1e7d248b-8288-4385-86b4-30fa3d43b8a4/volumes" Nov 24 21:40:24 crc kubenswrapper[4915]: I1124 21:40:24.449856 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-v5lml"] Nov 24 21:40:24 crc kubenswrapper[4915]: I1124 21:40:24.525545 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-mszzs"] Nov 24 21:40:25 crc kubenswrapper[4915]: I1124 21:40:25.185734 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 24 21:40:25 crc kubenswrapper[4915]: I1124 21:40:25.298985 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 24 21:40:25 crc kubenswrapper[4915]: I1124 21:40:25.334511 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-76b59b6c64-9qb78" event={"ID":"31d679a7-9921-4b31-a4bd-7bc758fdd651","Type":"ContainerStarted","Data":"cbcfaf3a8a6998584da6c382bf614ead3777bd43fc9c351572fdd975bef08de1"} Nov 24 21:40:25 crc kubenswrapper[4915]: I1124 21:40:25.336842 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mszzs" event={"ID":"e5c7ffec-8976-487c-9bff-d3697cef3724","Type":"ContainerStarted","Data":"9edf30ec586e59177f8643dffcf516b27cd6f0865bea42f4a003b42d85b71ee1"} Nov 24 21:40:25 crc kubenswrapper[4915]: I1124 21:40:25.349949 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-76b59b6c64-9qb78" podStartSLOduration=23.349935156 podStartE2EDuration="23.349935156s" 
podCreationTimestamp="2025-11-24 21:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:25.348465566 +0000 UTC m=+1243.664717739" watchObservedRunningTime="2025-11-24 21:40:25.349935156 +0000 UTC m=+1243.666187319" Nov 24 21:40:25 crc kubenswrapper[4915]: W1124 21:40:25.461168 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf07ff21_06da_4be5_85f8_cb5821d002bb.slice/crio-0c4fd66b989b5828421a8a123aecf53d177fd66149444f0597fe698cb9638443 WatchSource:0}: Error finding container 0c4fd66b989b5828421a8a123aecf53d177fd66149444f0597fe698cb9638443: Status 404 returned error can't find the container with id 0c4fd66b989b5828421a8a123aecf53d177fd66149444f0597fe698cb9638443 Nov 24 21:40:26 crc kubenswrapper[4915]: I1124 21:40:26.356303 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"30bb22de-3ca5-4580-92cf-09e8653a98ab","Type":"ContainerStarted","Data":"c5d07d503fe672c17f133522e87f5cd78fe026231a37c5df2ade98e824178d44"} Nov 24 21:40:26 crc kubenswrapper[4915]: I1124 21:40:26.358887 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"df07ff21-06da-4be5-85f8-cb5821d002bb","Type":"ContainerStarted","Data":"0c4fd66b989b5828421a8a123aecf53d177fd66149444f0597fe698cb9638443"} Nov 24 21:40:26 crc kubenswrapper[4915]: I1124 21:40:26.442518 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8babaf8f-1e9a-4a58-b846-010ccf01a4ab" path="/var/lib/kubelet/pods/8babaf8f-1e9a-4a58-b846-010ccf01a4ab/volumes" Nov 24 21:40:27 crc kubenswrapper[4915]: I1124 21:40:27.366983 4915 generic.go:334] "Generic (PLEG): container finished" podID="b0914514-21db-4664-9fdf-935c0f671637" containerID="88d015c928c1835f396562e4e804807325b75fdc083b72752fe6111424ff4f87" exitCode=0 Nov 24 21:40:27 crc 
kubenswrapper[4915]: I1124 21:40:27.367137 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b0914514-21db-4664-9fdf-935c0f671637","Type":"ContainerDied","Data":"88d015c928c1835f396562e4e804807325b75fdc083b72752fe6111424ff4f87"} Nov 24 21:40:27 crc kubenswrapper[4915]: I1124 21:40:27.370752 4915 generic.go:334] "Generic (PLEG): container finished" podID="d00638de-cc40-405d-b271-b681f199a172" containerID="19e84580522d9fbcdefd5c694398389e0384f4c472a5388100c06205cf725ca2" exitCode=0 Nov 24 21:40:27 crc kubenswrapper[4915]: I1124 21:40:27.370826 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d00638de-cc40-405d-b271-b681f199a172","Type":"ContainerDied","Data":"19e84580522d9fbcdefd5c694398389e0384f4c472a5388100c06205cf725ca2"} Nov 24 21:40:30 crc kubenswrapper[4915]: I1124 21:40:30.405904 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"b0914514-21db-4664-9fdf-935c0f671637","Type":"ContainerStarted","Data":"61014407bb8fc7de8529b01e92224f50742574f3c6a0ee76bcb752f5f640abe1"} Nov 24 21:40:30 crc kubenswrapper[4915]: I1124 21:40:30.411922 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"59818a4e-515f-477f-96ad-78e6b8310657","Type":"ContainerStarted","Data":"46d2a714e8a4fcc52bb6f320fbf436982644d43451243a4507e6e03705751780"} Nov 24 21:40:30 crc kubenswrapper[4915]: I1124 21:40:30.412150 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 24 21:40:30 crc kubenswrapper[4915]: I1124 21:40:30.456466 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.778129065 podStartE2EDuration="34.456440708s" podCreationTimestamp="2025-11-24 21:39:56 +0000 UTC" firstStartedPulling="2025-11-24 21:39:58.28562794 +0000 UTC m=+1216.601880113" 
lastFinishedPulling="2025-11-24 21:40:21.963939583 +0000 UTC m=+1240.280191756" observedRunningTime="2025-11-24 21:40:30.439165367 +0000 UTC m=+1248.755417550" watchObservedRunningTime="2025-11-24 21:40:30.456440708 +0000 UTC m=+1248.772692891" Nov 24 21:40:30 crc kubenswrapper[4915]: I1124 21:40:30.478358 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=27.387672083 podStartE2EDuration="32.478336032s" podCreationTimestamp="2025-11-24 21:39:58 +0000 UTC" firstStartedPulling="2025-11-24 21:40:23.295337265 +0000 UTC m=+1241.611589438" lastFinishedPulling="2025-11-24 21:40:28.386001214 +0000 UTC m=+1246.702253387" observedRunningTime="2025-11-24 21:40:30.477838818 +0000 UTC m=+1248.794091031" watchObservedRunningTime="2025-11-24 21:40:30.478336032 +0000 UTC m=+1248.794588195" Nov 24 21:40:31 crc kubenswrapper[4915]: I1124 21:40:31.420743 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wj47k" event={"ID":"67dd4a19-a0b7-4c7b-8289-40b7fc2476dc","Type":"ContainerStarted","Data":"e773158eb9e6157e2064139296c3ff1d09050be8d20f6444ebf2e4c21ab76be6"} Nov 24 21:40:31 crc kubenswrapper[4915]: I1124 21:40:31.421383 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-wj47k" Nov 24 21:40:31 crc kubenswrapper[4915]: I1124 21:40:31.422266 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-8dsqc" event={"ID":"22ffe2e4-a7d4-4af8-93e6-1255daaabe16","Type":"ContainerStarted","Data":"aa02dfc76710d680b66b10ea9b425c9038b3d28671b869578f73cff55ec4659f"} Nov 24 21:40:31 crc kubenswrapper[4915]: I1124 21:40:31.423953 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mszzs" event={"ID":"e5c7ffec-8976-487c-9bff-d3697cef3724","Type":"ContainerStarted","Data":"8b05328c7e53f799bd237d632209a7c66f87df161da752ea811be60b6f6a5adf"} Nov 24 21:40:31 
crc kubenswrapper[4915]: I1124 21:40:31.432487 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d00638de-cc40-405d-b271-b681f199a172","Type":"ContainerStarted","Data":"3030c4597cb748eafe0e9d213070c7e65c930075871b57e2efcc5473e9b63437"} Nov 24 21:40:31 crc kubenswrapper[4915]: I1124 21:40:31.460815 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-wj47k" podStartSLOduration=19.925644208 podStartE2EDuration="26.460799365s" podCreationTimestamp="2025-11-24 21:40:05 +0000 UTC" firstStartedPulling="2025-11-24 21:40:23.289013146 +0000 UTC m=+1241.605265319" lastFinishedPulling="2025-11-24 21:40:29.824168303 +0000 UTC m=+1248.140420476" observedRunningTime="2025-11-24 21:40:31.447029238 +0000 UTC m=+1249.763281411" watchObservedRunningTime="2025-11-24 21:40:31.460799365 +0000 UTC m=+1249.777051528" Nov 24 21:40:31 crc kubenswrapper[4915]: I1124 21:40:31.493425 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=34.493355364 podStartE2EDuration="34.493355364s" podCreationTimestamp="2025-11-24 21:39:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:31.472453386 +0000 UTC m=+1249.788705569" watchObservedRunningTime="2025-11-24 21:40:31.493355364 +0000 UTC m=+1249.809607547" Nov 24 21:40:31 crc kubenswrapper[4915]: I1124 21:40:31.538142 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-8dsqc" podStartSLOduration=24.868253229 podStartE2EDuration="30.538092417s" podCreationTimestamp="2025-11-24 21:40:01 +0000 UTC" firstStartedPulling="2025-11-24 21:40:24.15526234 +0000 UTC m=+1242.471514513" lastFinishedPulling="2025-11-24 21:40:29.825101528 +0000 UTC m=+1248.141353701" 
observedRunningTime="2025-11-24 21:40:31.530353651 +0000 UTC m=+1249.846605834" watchObservedRunningTime="2025-11-24 21:40:31.538092417 +0000 UTC m=+1249.854344600" Nov 24 21:40:32 crc kubenswrapper[4915]: I1124 21:40:32.457059 4915 generic.go:334] "Generic (PLEG): container finished" podID="e5c7ffec-8976-487c-9bff-d3697cef3724" containerID="8b05328c7e53f799bd237d632209a7c66f87df161da752ea811be60b6f6a5adf" exitCode=0 Nov 24 21:40:32 crc kubenswrapper[4915]: I1124 21:40:32.457135 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mszzs" event={"ID":"e5c7ffec-8976-487c-9bff-d3697cef3724","Type":"ContainerDied","Data":"8b05328c7e53f799bd237d632209a7c66f87df161da752ea811be60b6f6a5adf"} Nov 24 21:40:32 crc kubenswrapper[4915]: I1124 21:40:32.461082 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f841b28e-565b-41e8-9288-582e862cdceb","Type":"ContainerStarted","Data":"2cd5bdd8a6c0ab6a2ca80a5d03dd2fb0fe37c72fa61140b570070d9ff5f3eb08"} Nov 24 21:40:32 crc kubenswrapper[4915]: I1124 21:40:32.461131 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 21:40:32 crc kubenswrapper[4915]: I1124 21:40:32.464015 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"30bb22de-3ca5-4580-92cf-09e8653a98ab","Type":"ContainerStarted","Data":"c1a68134f5a9b84febf4b6bcf0421f84f411c8b249e6d4aabea04631e4636c38"} Nov 24 21:40:32 crc kubenswrapper[4915]: I1124 21:40:32.468369 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"df07ff21-06da-4be5-85f8-cb5821d002bb","Type":"ContainerStarted","Data":"51af42a7204b64369fecb94a50199e0ad381acbc802d28ae10ed4d071f8cc51c"} Nov 24 21:40:32 crc kubenswrapper[4915]: I1124 21:40:32.578139 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" 
podStartSLOduration=25.269369206 podStartE2EDuration="32.578121457s" podCreationTimestamp="2025-11-24 21:40:00 +0000 UTC" firstStartedPulling="2025-11-24 21:40:22.681282906 +0000 UTC m=+1240.997535079" lastFinishedPulling="2025-11-24 21:40:29.990035157 +0000 UTC m=+1248.306287330" observedRunningTime="2025-11-24 21:40:32.575750874 +0000 UTC m=+1250.892003057" watchObservedRunningTime="2025-11-24 21:40:32.578121457 +0000 UTC m=+1250.894373620" Nov 24 21:40:32 crc kubenswrapper[4915]: I1124 21:40:32.696862 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:32 crc kubenswrapper[4915]: I1124 21:40:32.696948 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:32 crc kubenswrapper[4915]: I1124 21:40:32.702607 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:33 crc kubenswrapper[4915]: I1124 21:40:33.478813 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a45944d3-396b-4683-b9b5-8e42e9331043","Type":"ContainerStarted","Data":"75d4058db27c5fe795eb5b76e0577c3f27bc129142a656460e05201b3b1c3c20"} Nov 24 21:40:33 crc kubenswrapper[4915]: I1124 21:40:33.481155 4915 generic.go:334] "Generic (PLEG): container finished" podID="086e5806-87ef-4c73-8446-192165490619" containerID="6cc9a1a4e7c434594ac68f395a9315e7c88d7e956745e2bcca0912bf4d36cf48" exitCode=0 Nov 24 21:40:33 crc kubenswrapper[4915]: I1124 21:40:33.481228 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" event={"ID":"086e5806-87ef-4c73-8446-192165490619","Type":"ContainerDied","Data":"6cc9a1a4e7c434594ac68f395a9315e7c88d7e956745e2bcca0912bf4d36cf48"} Nov 24 21:40:33 crc kubenswrapper[4915]: I1124 21:40:33.482999 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c50db1c-ac88-4299-ab96-8b750308610f","Type":"ContainerStarted","Data":"534c9537314191cda6d87d81dd98fa53f08531ad5782dccb0402569ba537e1b7"} Nov 24 21:40:33 crc kubenswrapper[4915]: I1124 21:40:33.484898 4915 generic.go:334] "Generic (PLEG): container finished" podID="e084c012-7c0b-407b-8e65-31598c80a76f" containerID="0ec84281a6cb1c943547f07ac7ac1084d3963b376be2e8a331357c1c9526fcee" exitCode=0 Nov 24 21:40:33 crc kubenswrapper[4915]: I1124 21:40:33.484978 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" event={"ID":"e084c012-7c0b-407b-8e65-31598c80a76f","Type":"ContainerDied","Data":"0ec84281a6cb1c943547f07ac7ac1084d3963b376be2e8a331357c1c9526fcee"} Nov 24 21:40:33 crc kubenswrapper[4915]: I1124 21:40:33.487070 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mszzs" event={"ID":"e5c7ffec-8976-487c-9bff-d3697cef3724","Type":"ContainerStarted","Data":"6fe012edf69a169036104fb3a39b5721b08cb503f9ac54beb0ef842937b0c7be"} Nov 24 21:40:33 crc kubenswrapper[4915]: I1124 21:40:33.490377 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-76b59b6c64-9qb78" Nov 24 21:40:33 crc kubenswrapper[4915]: I1124 21:40:33.617257 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f65944744-l2cf5"] Nov 24 21:40:34 crc kubenswrapper[4915]: I1124 21:40:34.506065 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"59f54e36-0033-4e91-be8b-7d447d666d04","Type":"ContainerStarted","Data":"987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133"} Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.518351 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"df07ff21-06da-4be5-85f8-cb5821d002bb","Type":"ContainerStarted","Data":"16b769e8fff87d9ffbfb79a242e152a895a1d51a1b1ff0559b57dd1d8eaedb4f"} Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.521095 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-mszzs" event={"ID":"e5c7ffec-8976-487c-9bff-d3697cef3724","Type":"ContainerStarted","Data":"f761e29843efb536de65e542bcc59d9f98381b740bb2563ca2f24911ab977811"} Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.521293 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.523494 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" event={"ID":"086e5806-87ef-4c73-8446-192165490619","Type":"ContainerStarted","Data":"d4470afb2edfe1ef89cbf1047782dd0ef0962ef2d3c47e9f24ac2c8dea40c843"} Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.523707 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.525576 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"30bb22de-3ca5-4580-92cf-09e8653a98ab","Type":"ContainerStarted","Data":"259b46a2b143e53de36ba6a4ae2f8530c9d547fb77fa5e3372ee2804b636cdee"} Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.528007 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" event={"ID":"e084c012-7c0b-407b-8e65-31598c80a76f","Type":"ContainerStarted","Data":"ad29a58ff3695ee3c2d8be1cdd726ba14efdc6968bf66fec7c47149543174d8e"} Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.528243 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.546511 4915 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=19.004206298 podStartE2EDuration="28.546488891s" podCreationTimestamp="2025-11-24 21:40:07 +0000 UTC" firstStartedPulling="2025-11-24 21:40:25.464128442 +0000 UTC m=+1243.780380615" lastFinishedPulling="2025-11-24 21:40:35.006411035 +0000 UTC m=+1253.322663208" observedRunningTime="2025-11-24 21:40:35.537305375 +0000 UTC m=+1253.853557558" watchObservedRunningTime="2025-11-24 21:40:35.546488891 +0000 UTC m=+1253.862741064" Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.561095 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" podStartSLOduration=4.856780003 podStartE2EDuration="41.561079339s" podCreationTimestamp="2025-11-24 21:39:54 +0000 UTC" firstStartedPulling="2025-11-24 21:39:55.211077535 +0000 UTC m=+1213.527329718" lastFinishedPulling="2025-11-24 21:40:31.915376881 +0000 UTC m=+1250.231629054" observedRunningTime="2025-11-24 21:40:35.558992074 +0000 UTC m=+1253.875244287" watchObservedRunningTime="2025-11-24 21:40:35.561079339 +0000 UTC m=+1253.877331513" Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.574242 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.574299 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.604072 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-mszzs" podStartSLOduration=25.777079589 podStartE2EDuration="30.604049935s" podCreationTimestamp="2025-11-24 21:40:05 +0000 UTC" firstStartedPulling="2025-11-24 21:40:24.5368791 +0000 UTC m=+1242.853131273" lastFinishedPulling="2025-11-24 21:40:29.363849426 +0000 UTC m=+1247.680101619" 
observedRunningTime="2025-11-24 21:40:35.600446029 +0000 UTC m=+1253.916698212" watchObservedRunningTime="2025-11-24 21:40:35.604049935 +0000 UTC m=+1253.920302128" Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.612026 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.617901 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.628673 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" podStartSLOduration=5.426584412 podStartE2EDuration="41.628651552s" podCreationTimestamp="2025-11-24 21:39:54 +0000 UTC" firstStartedPulling="2025-11-24 21:39:55.650957757 +0000 UTC m=+1213.967209930" lastFinishedPulling="2025-11-24 21:40:31.853024877 +0000 UTC m=+1250.169277070" observedRunningTime="2025-11-24 21:40:35.620956676 +0000 UTC m=+1253.937208869" watchObservedRunningTime="2025-11-24 21:40:35.628651552 +0000 UTC m=+1253.944903735" Nov 24 21:40:35 crc kubenswrapper[4915]: I1124 21:40:35.643310 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=23.817976151 podStartE2EDuration="32.643290532s" podCreationTimestamp="2025-11-24 21:40:03 +0000 UTC" firstStartedPulling="2025-11-24 21:40:26.179360688 +0000 UTC m=+1244.495612861" lastFinishedPulling="2025-11-24 21:40:35.004675069 +0000 UTC m=+1253.320927242" observedRunningTime="2025-11-24 21:40:35.64059296 +0000 UTC m=+1253.956845143" watchObservedRunningTime="2025-11-24 21:40:35.643290532 +0000 UTC m=+1253.959542705" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.359455 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.450765 4915 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.538061 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.609062 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.619807 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.923588 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m8vp9"] Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.936362 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wv8qp"] Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.938633 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.949509 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.968231 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wv8qp"] Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.991491 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-xpvqz"] Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.992357 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72sxw\" (UniqueName: \"kubernetes.io/projected/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-kube-api-access-72sxw\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.992422 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-combined-ca-bundle\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.992454 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.992492 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-ovn-rundir\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.992544 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-config\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.992585 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-ovs-rundir\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.993629 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:36 crc kubenswrapper[4915]: I1124 21:40:36.998979 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.013040 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-xpvqz"] Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.096488 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.096540 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-ovn-rundir\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.096562 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-xpvqz\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.096609 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-config\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 
21:40:37.096648 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-ovs-rundir\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.096730 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-xpvqz\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.096803 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlm6j\" (UniqueName: \"kubernetes.io/projected/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-kube-api-access-zlm6j\") pod \"dnsmasq-dns-6bc7876d45-xpvqz\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.096850 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72sxw\" (UniqueName: \"kubernetes.io/projected/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-kube-api-access-72sxw\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.096878 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-combined-ca-bundle\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:37 crc 
kubenswrapper[4915]: I1124 21:40:37.096901 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-config\") pod \"dnsmasq-dns-6bc7876d45-xpvqz\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.100746 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-ovs-rundir\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.100853 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-ovn-rundir\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.101665 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-config\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.102123 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bbxkx"] Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.144330 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.146413 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.156513 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.160049 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-combined-ca-bundle\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.162487 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.162730 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.162977 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.163110 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.163985 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-qklmt" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.171108 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-zphpj"] Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.173711 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.175347 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.179870 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zphpj"] Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.190565 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72sxw\" (UniqueName: \"kubernetes.io/projected/f8d6bb82-7d54-47c9-bc17-a572fe39d0df-kube-api-access-72sxw\") pod \"ovn-controller-metrics-wv8qp\" (UID: \"f8d6bb82-7d54-47c9-bc17-a572fe39d0df\") " pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198431 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2t9z\" (UniqueName: \"kubernetes.io/projected/1a9ca780-e98b-499f-8da0-3070a56716e4-kube-api-access-f2t9z\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198524 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-dns-svc\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198555 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-config\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" 
Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198583 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-config\") pod \"dnsmasq-dns-6bc7876d45-xpvqz\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198613 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-xpvqz\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198649 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f6ca90-aa94-45a7-803d-22cee7fd27aa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198667 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f6ca90-aa94-45a7-803d-22cee7fd27aa-config\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198687 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/13f6ca90-aa94-45a7-803d-22cee7fd27aa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198707 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13f6ca90-aa94-45a7-803d-22cee7fd27aa-scripts\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198726 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/13f6ca90-aa94-45a7-803d-22cee7fd27aa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198766 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/13f6ca90-aa94-45a7-803d-22cee7fd27aa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198811 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198836 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198860 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q54dk\" (UniqueName: \"kubernetes.io/projected/13f6ca90-aa94-45a7-803d-22cee7fd27aa-kube-api-access-q54dk\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198897 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-xpvqz\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.198918 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlm6j\" (UniqueName: \"kubernetes.io/projected/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-kube-api-access-zlm6j\") pod \"dnsmasq-dns-6bc7876d45-xpvqz\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.208980 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-config\") pod \"dnsmasq-dns-6bc7876d45-xpvqz\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.210190 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-xpvqz\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.210652 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-ovsdbserver-sb\") 
pod \"dnsmasq-dns-6bc7876d45-xpvqz\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.242177 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlm6j\" (UniqueName: \"kubernetes.io/projected/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-kube-api-access-zlm6j\") pod \"dnsmasq-dns-6bc7876d45-xpvqz\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.307196 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2t9z\" (UniqueName: \"kubernetes.io/projected/1a9ca780-e98b-499f-8da0-3070a56716e4-kube-api-access-f2t9z\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.307256 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-dns-svc\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.307296 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-config\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.307351 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f6ca90-aa94-45a7-803d-22cee7fd27aa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.307371 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f6ca90-aa94-45a7-803d-22cee7fd27aa-config\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.307394 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/13f6ca90-aa94-45a7-803d-22cee7fd27aa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.307411 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13f6ca90-aa94-45a7-803d-22cee7fd27aa-scripts\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.307426 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/13f6ca90-aa94-45a7-803d-22cee7fd27aa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.307465 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/13f6ca90-aa94-45a7-803d-22cee7fd27aa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.307489 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.307513 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.307533 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q54dk\" (UniqueName: \"kubernetes.io/projected/13f6ca90-aa94-45a7-803d-22cee7fd27aa-kube-api-access-q54dk\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.308556 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-config\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.308665 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.308727 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/13f6ca90-aa94-45a7-803d-22cee7fd27aa-scripts\") pod 
\"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.308899 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-dns-svc\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.309158 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13f6ca90-aa94-45a7-803d-22cee7fd27aa-config\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.309358 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.309536 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/13f6ca90-aa94-45a7-803d-22cee7fd27aa-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.311379 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/13f6ca90-aa94-45a7-803d-22cee7fd27aa-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.311617 4915 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13f6ca90-aa94-45a7-803d-22cee7fd27aa-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.312255 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/13f6ca90-aa94-45a7-803d-22cee7fd27aa-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.325335 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q54dk\" (UniqueName: \"kubernetes.io/projected/13f6ca90-aa94-45a7-803d-22cee7fd27aa-kube-api-access-q54dk\") pod \"ovn-northd-0\" (UID: \"13f6ca90-aa94-45a7-803d-22cee7fd27aa\") " pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.328731 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2t9z\" (UniqueName: \"kubernetes.io/projected/1a9ca780-e98b-499f-8da0-3070a56716e4-kube-api-access-f2t9z\") pod \"dnsmasq-dns-8554648995-zphpj\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.332023 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wv8qp" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.362595 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.549135 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.549507 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.591620 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" podUID="e084c012-7c0b-407b-8e65-31598c80a76f" containerName="dnsmasq-dns" containerID="cri-o://ad29a58ff3695ee3c2d8be1cdd726ba14efdc6968bf66fec7c47149543174d8e" gracePeriod=10 Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.592969 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" podUID="086e5806-87ef-4c73-8446-192165490619" containerName="dnsmasq-dns" containerID="cri-o://d4470afb2edfe1ef89cbf1047782dd0ef0962ef2d3c47e9f24ac2c8dea40c843" gracePeriod=10 Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.597661 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.602401 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.783889 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 24 21:40:37 crc kubenswrapper[4915]: I1124 21:40:37.870836 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wv8qp"] Nov 24 21:40:37 crc kubenswrapper[4915]: W1124 21:40:37.875045 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8d6bb82_7d54_47c9_bc17_a572fe39d0df.slice/crio-c92eeab5fcf43c0da4aa7e664285dce370e392c6b3fc446c1bc39c6daa51771b WatchSource:0}: Error finding container c92eeab5fcf43c0da4aa7e664285dce370e392c6b3fc446c1bc39c6daa51771b: Status 404 returned error can't find the container with id c92eeab5fcf43c0da4aa7e664285dce370e392c6b3fc446c1bc39c6daa51771b Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.070672 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-xpvqz"] Nov 24 21:40:38 crc kubenswrapper[4915]: W1124 21:40:38.080431 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c220a3f_cd28_4ec2_9dd4_54bf620b874f.slice/crio-be7d41ae6cf51655d87b4288d3a507bea3a019a15d7386bf06f0391bd421c7e2 WatchSource:0}: Error finding container be7d41ae6cf51655d87b4288d3a507bea3a019a15d7386bf06f0391bd421c7e2: Status 404 returned error can't find the container with id be7d41ae6cf51655d87b4288d3a507bea3a019a15d7386bf06f0391bd421c7e2 Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.291250 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.298186 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zphpj"] Nov 24 21:40:38 crc 
kubenswrapper[4915]: W1124 21:40:38.321926 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13f6ca90_aa94_45a7_803d_22cee7fd27aa.slice/crio-1ef9e300d6d69b252e4add0cf32b9b62958c84a8057eada46089a91c0ff6188c WatchSource:0}: Error finding container 1ef9e300d6d69b252e4add0cf32b9b62958c84a8057eada46089a91c0ff6188c: Status 404 returned error can't find the container with id 1ef9e300d6d69b252e4add0cf32b9b62958c84a8057eada46089a91c0ff6188c Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.430687 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.544178 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.546910 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nppd\" (UniqueName: \"kubernetes.io/projected/086e5806-87ef-4c73-8446-192165490619-kube-api-access-9nppd\") pod \"086e5806-87ef-4c73-8446-192165490619\" (UID: \"086e5806-87ef-4c73-8446-192165490619\") " Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.546958 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/086e5806-87ef-4c73-8446-192165490619-dns-svc\") pod \"086e5806-87ef-4c73-8446-192165490619\" (UID: \"086e5806-87ef-4c73-8446-192165490619\") " Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.547016 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086e5806-87ef-4c73-8446-192165490619-config\") pod \"086e5806-87ef-4c73-8446-192165490619\" (UID: \"086e5806-87ef-4c73-8446-192165490619\") " Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 
21:40:38.551605 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/086e5806-87ef-4c73-8446-192165490619-kube-api-access-9nppd" (OuterVolumeSpecName: "kube-api-access-9nppd") pod "086e5806-87ef-4c73-8446-192165490619" (UID: "086e5806-87ef-4c73-8446-192165490619"). InnerVolumeSpecName "kube-api-access-9nppd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.598564 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/086e5806-87ef-4c73-8446-192165490619-config" (OuterVolumeSpecName: "config") pod "086e5806-87ef-4c73-8446-192165490619" (UID: "086e5806-87ef-4c73-8446-192165490619"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.602709 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/086e5806-87ef-4c73-8446-192165490619-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "086e5806-87ef-4c73-8446-192165490619" (UID: "086e5806-87ef-4c73-8446-192165490619"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.607935 4915 generic.go:334] "Generic (PLEG): container finished" podID="5c220a3f-cd28-4ec2-9dd4-54bf620b874f" containerID="3bc1729e828a003ae84d24761ba79a9fbe3919376bdebe42cefa23d9b29b9989" exitCode=0 Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.608026 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" event={"ID":"5c220a3f-cd28-4ec2-9dd4-54bf620b874f","Type":"ContainerDied","Data":"3bc1729e828a003ae84d24761ba79a9fbe3919376bdebe42cefa23d9b29b9989"} Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.608056 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" event={"ID":"5c220a3f-cd28-4ec2-9dd4-54bf620b874f","Type":"ContainerStarted","Data":"be7d41ae6cf51655d87b4288d3a507bea3a019a15d7386bf06f0391bd421c7e2"} Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.619402 4915 generic.go:334] "Generic (PLEG): container finished" podID="1a9ca780-e98b-499f-8da0-3070a56716e4" containerID="8b527b32eee616366915a3ed2ab8b3da4b19af69cf2c10f70bc9f15d2f1c5834" exitCode=0 Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.620961 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-zphpj" event={"ID":"1a9ca780-e98b-499f-8da0-3070a56716e4","Type":"ContainerDied","Data":"8b527b32eee616366915a3ed2ab8b3da4b19af69cf2c10f70bc9f15d2f1c5834"} Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.621022 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-zphpj" event={"ID":"1a9ca780-e98b-499f-8da0-3070a56716e4","Type":"ContainerStarted","Data":"623bebd01efe76881642f105879a58fa2550f34b451565f17af47ef9ffe0a2df"} Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.624810 4915 generic.go:334] "Generic (PLEG): container finished" podID="086e5806-87ef-4c73-8446-192165490619" 
containerID="d4470afb2edfe1ef89cbf1047782dd0ef0962ef2d3c47e9f24ac2c8dea40c843" exitCode=0 Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.624902 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.625951 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" event={"ID":"086e5806-87ef-4c73-8446-192165490619","Type":"ContainerDied","Data":"d4470afb2edfe1ef89cbf1047782dd0ef0962ef2d3c47e9f24ac2c8dea40c843"} Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.625992 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bbxkx" event={"ID":"086e5806-87ef-4c73-8446-192165490619","Type":"ContainerDied","Data":"19d331378b1c794965ab9c8230f99d2d682be60022ea2b1bc49a0564274bbaf9"} Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.626012 4915 scope.go:117] "RemoveContainer" containerID="d4470afb2edfe1ef89cbf1047782dd0ef0962ef2d3c47e9f24ac2c8dea40c843" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.632491 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"13f6ca90-aa94-45a7-803d-22cee7fd27aa","Type":"ContainerStarted","Data":"1ef9e300d6d69b252e4add0cf32b9b62958c84a8057eada46089a91c0ff6188c"} Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.635473 4915 generic.go:334] "Generic (PLEG): container finished" podID="e084c012-7c0b-407b-8e65-31598c80a76f" containerID="ad29a58ff3695ee3c2d8be1cdd726ba14efdc6968bf66fec7c47149543174d8e" exitCode=0 Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.635543 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.635566 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" event={"ID":"e084c012-7c0b-407b-8e65-31598c80a76f","Type":"ContainerDied","Data":"ad29a58ff3695ee3c2d8be1cdd726ba14efdc6968bf66fec7c47149543174d8e"} Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.635598 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-m8vp9" event={"ID":"e084c012-7c0b-407b-8e65-31598c80a76f","Type":"ContainerDied","Data":"30d158408e17c80b81e816ce08395ac6d1a520500222ca9d8903b4238cb112e8"} Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.638615 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wv8qp" event={"ID":"f8d6bb82-7d54-47c9-bc17-a572fe39d0df","Type":"ContainerStarted","Data":"260780969b840c5d2a4665d0e121cc9456411df31ff315c06a4a9c5dd6996610"} Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.638648 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wv8qp" event={"ID":"f8d6bb82-7d54-47c9-bc17-a572fe39d0df","Type":"ContainerStarted","Data":"c92eeab5fcf43c0da4aa7e664285dce370e392c6b3fc446c1bc39c6daa51771b"} Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.648477 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfljs\" (UniqueName: \"kubernetes.io/projected/e084c012-7c0b-407b-8e65-31598c80a76f-kube-api-access-vfljs\") pod \"e084c012-7c0b-407b-8e65-31598c80a76f\" (UID: \"e084c012-7c0b-407b-8e65-31598c80a76f\") " Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.648524 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e084c012-7c0b-407b-8e65-31598c80a76f-dns-svc\") pod \"e084c012-7c0b-407b-8e65-31598c80a76f\" (UID: 
\"e084c012-7c0b-407b-8e65-31598c80a76f\") " Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.648667 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e084c012-7c0b-407b-8e65-31598c80a76f-config\") pod \"e084c012-7c0b-407b-8e65-31598c80a76f\" (UID: \"e084c012-7c0b-407b-8e65-31598c80a76f\") " Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.649664 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nppd\" (UniqueName: \"kubernetes.io/projected/086e5806-87ef-4c73-8446-192165490619-kube-api-access-9nppd\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.649707 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/086e5806-87ef-4c73-8446-192165490619-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.649720 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086e5806-87ef-4c73-8446-192165490619-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.652536 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e084c012-7c0b-407b-8e65-31598c80a76f-kube-api-access-vfljs" (OuterVolumeSpecName: "kube-api-access-vfljs") pod "e084c012-7c0b-407b-8e65-31598c80a76f" (UID: "e084c012-7c0b-407b-8e65-31598c80a76f"). InnerVolumeSpecName "kube-api-access-vfljs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.695431 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wv8qp" podStartSLOduration=2.69540991 podStartE2EDuration="2.69540991s" podCreationTimestamp="2025-11-24 21:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:38.673329781 +0000 UTC m=+1256.989581964" watchObservedRunningTime="2025-11-24 21:40:38.69540991 +0000 UTC m=+1257.011662083" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.726522 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bbxkx"] Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.728241 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e084c012-7c0b-407b-8e65-31598c80a76f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e084c012-7c0b-407b-8e65-31598c80a76f" (UID: "e084c012-7c0b-407b-8e65-31598c80a76f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.729752 4915 scope.go:117] "RemoveContainer" containerID="6cc9a1a4e7c434594ac68f395a9315e7c88d7e956745e2bcca0912bf4d36cf48" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.736883 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e084c012-7c0b-407b-8e65-31598c80a76f-config" (OuterVolumeSpecName: "config") pod "e084c012-7c0b-407b-8e65-31598c80a76f" (UID: "e084c012-7c0b-407b-8e65-31598c80a76f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.737125 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bbxkx"] Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.749525 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.751400 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfljs\" (UniqueName: \"kubernetes.io/projected/e084c012-7c0b-407b-8e65-31598c80a76f-kube-api-access-vfljs\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.751424 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e084c012-7c0b-407b-8e65-31598c80a76f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.751434 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e084c012-7c0b-407b-8e65-31598c80a76f-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.760155 4915 scope.go:117] "RemoveContainer" containerID="d4470afb2edfe1ef89cbf1047782dd0ef0962ef2d3c47e9f24ac2c8dea40c843" Nov 24 21:40:38 crc kubenswrapper[4915]: E1124 21:40:38.763061 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4470afb2edfe1ef89cbf1047782dd0ef0962ef2d3c47e9f24ac2c8dea40c843\": container with ID starting with d4470afb2edfe1ef89cbf1047782dd0ef0962ef2d3c47e9f24ac2c8dea40c843 not found: ID does not exist" containerID="d4470afb2edfe1ef89cbf1047782dd0ef0962ef2d3c47e9f24ac2c8dea40c843" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.763107 4915 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d4470afb2edfe1ef89cbf1047782dd0ef0962ef2d3c47e9f24ac2c8dea40c843"} err="failed to get container status \"d4470afb2edfe1ef89cbf1047782dd0ef0962ef2d3c47e9f24ac2c8dea40c843\": rpc error: code = NotFound desc = could not find container \"d4470afb2edfe1ef89cbf1047782dd0ef0962ef2d3c47e9f24ac2c8dea40c843\": container with ID starting with d4470afb2edfe1ef89cbf1047782dd0ef0962ef2d3c47e9f24ac2c8dea40c843 not found: ID does not exist" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.763136 4915 scope.go:117] "RemoveContainer" containerID="6cc9a1a4e7c434594ac68f395a9315e7c88d7e956745e2bcca0912bf4d36cf48" Nov 24 21:40:38 crc kubenswrapper[4915]: E1124 21:40:38.765679 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cc9a1a4e7c434594ac68f395a9315e7c88d7e956745e2bcca0912bf4d36cf48\": container with ID starting with 6cc9a1a4e7c434594ac68f395a9315e7c88d7e956745e2bcca0912bf4d36cf48 not found: ID does not exist" containerID="6cc9a1a4e7c434594ac68f395a9315e7c88d7e956745e2bcca0912bf4d36cf48" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.765746 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cc9a1a4e7c434594ac68f395a9315e7c88d7e956745e2bcca0912bf4d36cf48"} err="failed to get container status \"6cc9a1a4e7c434594ac68f395a9315e7c88d7e956745e2bcca0912bf4d36cf48\": rpc error: code = NotFound desc = could not find container \"6cc9a1a4e7c434594ac68f395a9315e7c88d7e956745e2bcca0912bf4d36cf48\": container with ID starting with 6cc9a1a4e7c434594ac68f395a9315e7c88d7e956745e2bcca0912bf4d36cf48 not found: ID does not exist" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.765797 4915 scope.go:117] "RemoveContainer" containerID="ad29a58ff3695ee3c2d8be1cdd726ba14efdc6968bf66fec7c47149543174d8e" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.802658 4915 scope.go:117] "RemoveContainer" 
containerID="0ec84281a6cb1c943547f07ac7ac1084d3963b376be2e8a331357c1c9526fcee" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.829749 4915 scope.go:117] "RemoveContainer" containerID="ad29a58ff3695ee3c2d8be1cdd726ba14efdc6968bf66fec7c47149543174d8e" Nov 24 21:40:38 crc kubenswrapper[4915]: E1124 21:40:38.830192 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad29a58ff3695ee3c2d8be1cdd726ba14efdc6968bf66fec7c47149543174d8e\": container with ID starting with ad29a58ff3695ee3c2d8be1cdd726ba14efdc6968bf66fec7c47149543174d8e not found: ID does not exist" containerID="ad29a58ff3695ee3c2d8be1cdd726ba14efdc6968bf66fec7c47149543174d8e" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.830231 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad29a58ff3695ee3c2d8be1cdd726ba14efdc6968bf66fec7c47149543174d8e"} err="failed to get container status \"ad29a58ff3695ee3c2d8be1cdd726ba14efdc6968bf66fec7c47149543174d8e\": rpc error: code = NotFound desc = could not find container \"ad29a58ff3695ee3c2d8be1cdd726ba14efdc6968bf66fec7c47149543174d8e\": container with ID starting with ad29a58ff3695ee3c2d8be1cdd726ba14efdc6968bf66fec7c47149543174d8e not found: ID does not exist" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.830257 4915 scope.go:117] "RemoveContainer" containerID="0ec84281a6cb1c943547f07ac7ac1084d3963b376be2e8a331357c1c9526fcee" Nov 24 21:40:38 crc kubenswrapper[4915]: E1124 21:40:38.831220 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ec84281a6cb1c943547f07ac7ac1084d3963b376be2e8a331357c1c9526fcee\": container with ID starting with 0ec84281a6cb1c943547f07ac7ac1084d3963b376be2e8a331357c1c9526fcee not found: ID does not exist" containerID="0ec84281a6cb1c943547f07ac7ac1084d3963b376be2e8a331357c1c9526fcee" Nov 24 21:40:38 crc 
kubenswrapper[4915]: I1124 21:40:38.831247 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ec84281a6cb1c943547f07ac7ac1084d3963b376be2e8a331357c1c9526fcee"} err="failed to get container status \"0ec84281a6cb1c943547f07ac7ac1084d3963b376be2e8a331357c1c9526fcee\": rpc error: code = NotFound desc = could not find container \"0ec84281a6cb1c943547f07ac7ac1084d3963b376be2e8a331357c1c9526fcee\": container with ID starting with 0ec84281a6cb1c943547f07ac7ac1084d3963b376be2e8a331357c1c9526fcee not found: ID does not exist" Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.976724 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m8vp9"] Nov 24 21:40:38 crc kubenswrapper[4915]: I1124 21:40:38.994765 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-m8vp9"] Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.003141 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c522-account-create-hp224"] Nov 24 21:40:39 crc kubenswrapper[4915]: E1124 21:40:39.003589 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e084c012-7c0b-407b-8e65-31598c80a76f" containerName="dnsmasq-dns" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.003605 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e084c012-7c0b-407b-8e65-31598c80a76f" containerName="dnsmasq-dns" Nov 24 21:40:39 crc kubenswrapper[4915]: E1124 21:40:39.003624 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086e5806-87ef-4c73-8446-192165490619" containerName="dnsmasq-dns" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.003630 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="086e5806-87ef-4c73-8446-192165490619" containerName="dnsmasq-dns" Nov 24 21:40:39 crc kubenswrapper[4915]: E1124 21:40:39.003660 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e084c012-7c0b-407b-8e65-31598c80a76f" containerName="init" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.003669 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e084c012-7c0b-407b-8e65-31598c80a76f" containerName="init" Nov 24 21:40:39 crc kubenswrapper[4915]: E1124 21:40:39.003680 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="086e5806-87ef-4c73-8446-192165490619" containerName="init" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.003685 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="086e5806-87ef-4c73-8446-192165490619" containerName="init" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.003872 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e084c012-7c0b-407b-8e65-31598c80a76f" containerName="dnsmasq-dns" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.003894 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="086e5806-87ef-4c73-8446-192165490619" containerName="dnsmasq-dns" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.004675 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c522-account-create-hp224" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.006554 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.011702 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c522-account-create-hp224"] Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.056884 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb6d5ae7-edbd-4f11-8635-02213556ff48-operator-scripts\") pod \"keystone-c522-account-create-hp224\" (UID: \"bb6d5ae7-edbd-4f11-8635-02213556ff48\") " pod="openstack/keystone-c522-account-create-hp224" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.057043 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz7zj\" (UniqueName: \"kubernetes.io/projected/bb6d5ae7-edbd-4f11-8635-02213556ff48-kube-api-access-lz7zj\") pod \"keystone-c522-account-create-hp224\" (UID: \"bb6d5ae7-edbd-4f11-8635-02213556ff48\") " pod="openstack/keystone-c522-account-create-hp224" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.167967 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz7zj\" (UniqueName: \"kubernetes.io/projected/bb6d5ae7-edbd-4f11-8635-02213556ff48-kube-api-access-lz7zj\") pod \"keystone-c522-account-create-hp224\" (UID: \"bb6d5ae7-edbd-4f11-8635-02213556ff48\") " pod="openstack/keystone-c522-account-create-hp224" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.168189 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb6d5ae7-edbd-4f11-8635-02213556ff48-operator-scripts\") pod \"keystone-c522-account-create-hp224\" (UID: 
\"bb6d5ae7-edbd-4f11-8635-02213556ff48\") " pod="openstack/keystone-c522-account-create-hp224" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.183287 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-rp6qr"] Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.184188 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb6d5ae7-edbd-4f11-8635-02213556ff48-operator-scripts\") pod \"keystone-c522-account-create-hp224\" (UID: \"bb6d5ae7-edbd-4f11-8635-02213556ff48\") " pod="openstack/keystone-c522-account-create-hp224" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.185339 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rp6qr" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.202552 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz7zj\" (UniqueName: \"kubernetes.io/projected/bb6d5ae7-edbd-4f11-8635-02213556ff48-kube-api-access-lz7zj\") pod \"keystone-c522-account-create-hp224\" (UID: \"bb6d5ae7-edbd-4f11-8635-02213556ff48\") " pod="openstack/keystone-c522-account-create-hp224" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.213942 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rp6qr"] Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.220584 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-59d9-account-create-tk9r7"] Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.222802 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-59d9-account-create-tk9r7" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.224965 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.228114 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59d9-account-create-tk9r7"] Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.246366 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.277625 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj9bh\" (UniqueName: \"kubernetes.io/projected/8ec792db-f087-4f1f-a6d5-83e85ca536f9-kube-api-access-zj9bh\") pod \"placement-db-create-rp6qr\" (UID: \"8ec792db-f087-4f1f-a6d5-83e85ca536f9\") " pod="openstack/placement-db-create-rp6qr" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.277955 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec792db-f087-4f1f-a6d5-83e85ca536f9-operator-scripts\") pod \"placement-db-create-rp6qr\" (UID: \"8ec792db-f087-4f1f-a6d5-83e85ca536f9\") " pod="openstack/placement-db-create-rp6qr" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.325174 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c522-account-create-hp224" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.373614 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-pxmbk"] Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.374982 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pxmbk" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.379711 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec792db-f087-4f1f-a6d5-83e85ca536f9-operator-scripts\") pod \"placement-db-create-rp6qr\" (UID: \"8ec792db-f087-4f1f-a6d5-83e85ca536f9\") " pod="openstack/placement-db-create-rp6qr" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.379803 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e339ac3-4a26-4a61-8887-e007e1f3d35c-operator-scripts\") pod \"placement-59d9-account-create-tk9r7\" (UID: \"8e339ac3-4a26-4a61-8887-e007e1f3d35c\") " pod="openstack/placement-59d9-account-create-tk9r7" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.379860 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj9bh\" (UniqueName: \"kubernetes.io/projected/8ec792db-f087-4f1f-a6d5-83e85ca536f9-kube-api-access-zj9bh\") pod \"placement-db-create-rp6qr\" (UID: \"8ec792db-f087-4f1f-a6d5-83e85ca536f9\") " pod="openstack/placement-db-create-rp6qr" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.379897 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq984\" (UniqueName: \"kubernetes.io/projected/8e339ac3-4a26-4a61-8887-e007e1f3d35c-kube-api-access-bq984\") pod \"placement-59d9-account-create-tk9r7\" (UID: \"8e339ac3-4a26-4a61-8887-e007e1f3d35c\") " pod="openstack/placement-59d9-account-create-tk9r7" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.380417 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec792db-f087-4f1f-a6d5-83e85ca536f9-operator-scripts\") pod \"placement-db-create-rp6qr\" 
(UID: \"8ec792db-f087-4f1f-a6d5-83e85ca536f9\") " pod="openstack/placement-db-create-rp6qr" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.394837 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pxmbk"] Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.399537 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj9bh\" (UniqueName: \"kubernetes.io/projected/8ec792db-f087-4f1f-a6d5-83e85ca536f9-kube-api-access-zj9bh\") pod \"placement-db-create-rp6qr\" (UID: \"8ec792db-f087-4f1f-a6d5-83e85ca536f9\") " pod="openstack/placement-db-create-rp6qr" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.472853 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-381b-account-create-2p98p"] Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.480957 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-381b-account-create-2p98p" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.485958 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq984\" (UniqueName: \"kubernetes.io/projected/8e339ac3-4a26-4a61-8887-e007e1f3d35c-kube-api-access-bq984\") pod \"placement-59d9-account-create-tk9r7\" (UID: \"8e339ac3-4a26-4a61-8887-e007e1f3d35c\") " pod="openstack/placement-59d9-account-create-tk9r7" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.486130 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e339ac3-4a26-4a61-8887-e007e1f3d35c-operator-scripts\") pod \"placement-59d9-account-create-tk9r7\" (UID: \"8e339ac3-4a26-4a61-8887-e007e1f3d35c\") " pod="openstack/placement-59d9-account-create-tk9r7" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.486158 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/b449b4ff-1966-465a-8df7-13c2c1a28f75-operator-scripts\") pod \"glance-db-create-pxmbk\" (UID: \"b449b4ff-1966-465a-8df7-13c2c1a28f75\") " pod="openstack/glance-db-create-pxmbk" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.486192 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8gt5\" (UniqueName: \"kubernetes.io/projected/b449b4ff-1966-465a-8df7-13c2c1a28f75-kube-api-access-d8gt5\") pod \"glance-db-create-pxmbk\" (UID: \"b449b4ff-1966-465a-8df7-13c2c1a28f75\") " pod="openstack/glance-db-create-pxmbk" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.487033 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e339ac3-4a26-4a61-8887-e007e1f3d35c-operator-scripts\") pod \"placement-59d9-account-create-tk9r7\" (UID: \"8e339ac3-4a26-4a61-8887-e007e1f3d35c\") " pod="openstack/placement-59d9-account-create-tk9r7" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.490511 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.493550 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-381b-account-create-2p98p"] Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.511662 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq984\" (UniqueName: \"kubernetes.io/projected/8e339ac3-4a26-4a61-8887-e007e1f3d35c-kube-api-access-bq984\") pod \"placement-59d9-account-create-tk9r7\" (UID: \"8e339ac3-4a26-4a61-8887-e007e1f3d35c\") " pod="openstack/placement-59d9-account-create-tk9r7" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.588017 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8msgt\" (UniqueName: 
\"kubernetes.io/projected/1bfad247-8a41-40c1-876a-a8948d106c5f-kube-api-access-8msgt\") pod \"glance-381b-account-create-2p98p\" (UID: \"1bfad247-8a41-40c1-876a-a8948d106c5f\") " pod="openstack/glance-381b-account-create-2p98p" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.588165 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b449b4ff-1966-465a-8df7-13c2c1a28f75-operator-scripts\") pod \"glance-db-create-pxmbk\" (UID: \"b449b4ff-1966-465a-8df7-13c2c1a28f75\") " pod="openstack/glance-db-create-pxmbk" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.588219 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8gt5\" (UniqueName: \"kubernetes.io/projected/b449b4ff-1966-465a-8df7-13c2c1a28f75-kube-api-access-d8gt5\") pod \"glance-db-create-pxmbk\" (UID: \"b449b4ff-1966-465a-8df7-13c2c1a28f75\") " pod="openstack/glance-db-create-pxmbk" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.588282 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bfad247-8a41-40c1-876a-a8948d106c5f-operator-scripts\") pod \"glance-381b-account-create-2p98p\" (UID: \"1bfad247-8a41-40c1-876a-a8948d106c5f\") " pod="openstack/glance-381b-account-create-2p98p" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.589175 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b449b4ff-1966-465a-8df7-13c2c1a28f75-operator-scripts\") pod \"glance-db-create-pxmbk\" (UID: \"b449b4ff-1966-465a-8df7-13c2c1a28f75\") " pod="openstack/glance-db-create-pxmbk" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.604500 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-rp6qr" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.610547 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-59d9-account-create-tk9r7" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.612832 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8gt5\" (UniqueName: \"kubernetes.io/projected/b449b4ff-1966-465a-8df7-13c2c1a28f75-kube-api-access-d8gt5\") pod \"glance-db-create-pxmbk\" (UID: \"b449b4ff-1966-465a-8df7-13c2c1a28f75\") " pod="openstack/glance-db-create-pxmbk" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.672129 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" event={"ID":"5c220a3f-cd28-4ec2-9dd4-54bf620b874f","Type":"ContainerStarted","Data":"44c2eb5bdf0da5e246548f1d2ef03a4d19a509aec35c968db9f50e0f1a990917"} Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.672987 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.675925 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-zphpj" event={"ID":"1a9ca780-e98b-499f-8da0-3070a56716e4","Type":"ContainerStarted","Data":"bc0b4c1c9eb30630413ee1318499618d83fe9aa96db9a2468e03181d0e381cbf"} Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.690986 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bfad247-8a41-40c1-876a-a8948d106c5f-operator-scripts\") pod \"glance-381b-account-create-2p98p\" (UID: \"1bfad247-8a41-40c1-876a-a8948d106c5f\") " pod="openstack/glance-381b-account-create-2p98p" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.691191 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8msgt\" (UniqueName: \"kubernetes.io/projected/1bfad247-8a41-40c1-876a-a8948d106c5f-kube-api-access-8msgt\") pod \"glance-381b-account-create-2p98p\" (UID: \"1bfad247-8a41-40c1-876a-a8948d106c5f\") " pod="openstack/glance-381b-account-create-2p98p" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.692237 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bfad247-8a41-40c1-876a-a8948d106c5f-operator-scripts\") pod \"glance-381b-account-create-2p98p\" (UID: \"1bfad247-8a41-40c1-876a-a8948d106c5f\") " pod="openstack/glance-381b-account-create-2p98p" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.700627 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pxmbk" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.702105 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" podStartSLOduration=3.7020934199999997 podStartE2EDuration="3.70209342s" podCreationTimestamp="2025-11-24 21:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:39.687756308 +0000 UTC m=+1258.004008491" watchObservedRunningTime="2025-11-24 21:40:39.70209342 +0000 UTC m=+1258.018345583" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.708504 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8msgt\" (UniqueName: \"kubernetes.io/projected/1bfad247-8a41-40c1-876a-a8948d106c5f-kube-api-access-8msgt\") pod \"glance-381b-account-create-2p98p\" (UID: \"1bfad247-8a41-40c1-876a-a8948d106c5f\") " pod="openstack/glance-381b-account-create-2p98p" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.724034 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-zphpj" 
podStartSLOduration=2.724013665 podStartE2EDuration="2.724013665s" podCreationTimestamp="2025-11-24 21:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:39.719408291 +0000 UTC m=+1258.035660474" watchObservedRunningTime="2025-11-24 21:40:39.724013665 +0000 UTC m=+1258.040265838" Nov 24 21:40:39 crc kubenswrapper[4915]: I1124 21:40:39.804036 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-381b-account-create-2p98p" Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.093718 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.094066 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.240370 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-59d9-account-create-tk9r7"] Nov 24 21:40:40 crc kubenswrapper[4915]: W1124 21:40:40.244940 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e339ac3_4a26_4a61_8887_e007e1f3d35c.slice/crio-55987e3e86530cd9e4d46d479e4feb19a6ca493297686901eb49d24fb151aa73 WatchSource:0}: Error finding container 55987e3e86530cd9e4d46d479e4feb19a6ca493297686901eb49d24fb151aa73: Status 404 returned error can't find the container with id 55987e3e86530cd9e4d46d479e4feb19a6ca493297686901eb49d24fb151aa73 Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.292137 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.300296 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c522-account-create-hp224"] Nov 24 
21:40:40 crc kubenswrapper[4915]: W1124 21:40:40.310998 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb6d5ae7_edbd_4f11_8635_02213556ff48.slice/crio-911f413fdcc834b9836b65c58802a93e7d5eb3b97869b2ba70f5acef31a5462a WatchSource:0}: Error finding container 911f413fdcc834b9836b65c58802a93e7d5eb3b97869b2ba70f5acef31a5462a: Status 404 returned error can't find the container with id 911f413fdcc834b9836b65c58802a93e7d5eb3b97869b2ba70f5acef31a5462a Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.326606 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-rp6qr"] Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.446134 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="086e5806-87ef-4c73-8446-192165490619" path="/var/lib/kubelet/pods/086e5806-87ef-4c73-8446-192165490619/volumes" Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.447493 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e084c012-7c0b-407b-8e65-31598c80a76f" path="/var/lib/kubelet/pods/e084c012-7c0b-407b-8e65-31598c80a76f/volumes" Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.466955 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pxmbk"] Nov 24 21:40:40 crc kubenswrapper[4915]: W1124 21:40:40.484522 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb449b4ff_1966_465a_8df7_13c2c1a28f75.slice/crio-2e6f0aa91bb1453e6020b107cc42c86e5ca475167855f6aa37118cab186db6f6 WatchSource:0}: Error finding container 2e6f0aa91bb1453e6020b107cc42c86e5ca475167855f6aa37118cab186db6f6: Status 404 returned error can't find the container with id 2e6f0aa91bb1453e6020b107cc42c86e5ca475167855f6aa37118cab186db6f6 Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.560945 4915 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-381b-account-create-2p98p"] Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.687766 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rp6qr" event={"ID":"8ec792db-f087-4f1f-a6d5-83e85ca536f9","Type":"ContainerStarted","Data":"94c360f9dd8172b2701ffde4c9b1235e3b7bd24a13029ca55ae0ab784fee5420"} Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.687830 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rp6qr" event={"ID":"8ec792db-f087-4f1f-a6d5-83e85ca536f9","Type":"ContainerStarted","Data":"21814ab4da157a832d8889b2a836146c88cd26cd4c7316762356f0d0d5c74ec1"} Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.689850 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"13f6ca90-aa94-45a7-803d-22cee7fd27aa","Type":"ContainerStarted","Data":"2ebb0f82a25742f0359d1631b2586930bd53580b603b4f8a3026fbce754b0d54"} Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.689880 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"13f6ca90-aa94-45a7-803d-22cee7fd27aa","Type":"ContainerStarted","Data":"e533a918d95a88d13fa5d9b9bb092adec1ef89e3ed0d9c16e337c7afd55ef6cc"} Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.690132 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.691063 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-381b-account-create-2p98p" event={"ID":"1bfad247-8a41-40c1-876a-a8948d106c5f","Type":"ContainerStarted","Data":"74156e1ddb815e607ba644668f5a0e96eb68975dd18e462505c299f27c3ec101"} Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.692932 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59d9-account-create-tk9r7" 
event={"ID":"8e339ac3-4a26-4a61-8887-e007e1f3d35c","Type":"ContainerStarted","Data":"8996b7c9d8affc73412b71985eac6479d526428730be7e006bc597787068b8ed"} Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.692980 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59d9-account-create-tk9r7" event={"ID":"8e339ac3-4a26-4a61-8887-e007e1f3d35c","Type":"ContainerStarted","Data":"55987e3e86530cd9e4d46d479e4feb19a6ca493297686901eb49d24fb151aa73"} Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.696018 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pxmbk" event={"ID":"b449b4ff-1966-465a-8df7-13c2c1a28f75","Type":"ContainerStarted","Data":"a78fb559c775d164594a142ecac63fb8bcdd92613226ab8ccb2ac1d761a800cb"} Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.696174 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pxmbk" event={"ID":"b449b4ff-1966-465a-8df7-13c2c1a28f75","Type":"ContainerStarted","Data":"2e6f0aa91bb1453e6020b107cc42c86e5ca475167855f6aa37118cab186db6f6"} Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.698680 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c522-account-create-hp224" event={"ID":"bb6d5ae7-edbd-4f11-8635-02213556ff48","Type":"ContainerStarted","Data":"1a55bdb8125041c319557e9986b7c8acd09fb081321ba18793bbe20a0b54fd2f"} Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.698723 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c522-account-create-hp224" event={"ID":"bb6d5ae7-edbd-4f11-8635-02213556ff48","Type":"ContainerStarted","Data":"911f413fdcc834b9836b65c58802a93e7d5eb3b97869b2ba70f5acef31a5462a"} Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.699171 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.709367 4915 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-rp6qr" podStartSLOduration=1.709348546 podStartE2EDuration="1.709348546s" podCreationTimestamp="2025-11-24 21:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:40.707038194 +0000 UTC m=+1259.023290367" watchObservedRunningTime="2025-11-24 21:40:40.709348546 +0000 UTC m=+1259.025600719" Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.729864 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c522-account-create-hp224" podStartSLOduration=2.729844142 podStartE2EDuration="2.729844142s" podCreationTimestamp="2025-11-24 21:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:40.724693695 +0000 UTC m=+1259.040945868" watchObservedRunningTime="2025-11-24 21:40:40.729844142 +0000 UTC m=+1259.046096335" Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.737895 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-pxmbk" podStartSLOduration=1.737876907 podStartE2EDuration="1.737876907s" podCreationTimestamp="2025-11-24 21:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:40.737034324 +0000 UTC m=+1259.053286497" watchObservedRunningTime="2025-11-24 21:40:40.737876907 +0000 UTC m=+1259.054129080" Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.773503 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.323380109 podStartE2EDuration="3.773464066s" podCreationTimestamp="2025-11-24 21:40:37 +0000 UTC" firstStartedPulling="2025-11-24 21:40:38.326929981 +0000 UTC 
m=+1256.643182144" lastFinishedPulling="2025-11-24 21:40:39.777013928 +0000 UTC m=+1258.093266101" observedRunningTime="2025-11-24 21:40:40.755549968 +0000 UTC m=+1259.071802141" watchObservedRunningTime="2025-11-24 21:40:40.773464066 +0000 UTC m=+1259.089716259" Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.793253 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-59d9-account-create-tk9r7" podStartSLOduration=1.793235493 podStartE2EDuration="1.793235493s" podCreationTimestamp="2025-11-24 21:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:40.776962949 +0000 UTC m=+1259.093215122" watchObservedRunningTime="2025-11-24 21:40:40.793235493 +0000 UTC m=+1259.109487656" Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.801397 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.995574 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fwvf6"] Nov 24 21:40:40 crc kubenswrapper[4915]: I1124 21:40:40.996789 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fwvf6" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.015161 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fwvf6"] Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.136400 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58f1a4a9-6f26-483a-966a-f29142a74b4e-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-fwvf6\" (UID: \"58f1a4a9-6f26-483a-966a-f29142a74b4e\") " pod="openstack/mysqld-exporter-openstack-db-create-fwvf6" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.136828 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h4wj\" (UniqueName: \"kubernetes.io/projected/58f1a4a9-6f26-483a-966a-f29142a74b4e-kube-api-access-2h4wj\") pod \"mysqld-exporter-openstack-db-create-fwvf6\" (UID: \"58f1a4a9-6f26-483a-966a-f29142a74b4e\") " pod="openstack/mysqld-exporter-openstack-db-create-fwvf6" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.237138 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-xpvqz"] Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.238239 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58f1a4a9-6f26-483a-966a-f29142a74b4e-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-fwvf6\" (UID: \"58f1a4a9-6f26-483a-966a-f29142a74b4e\") " pod="openstack/mysqld-exporter-openstack-db-create-fwvf6" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.238359 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h4wj\" (UniqueName: \"kubernetes.io/projected/58f1a4a9-6f26-483a-966a-f29142a74b4e-kube-api-access-2h4wj\") 
pod \"mysqld-exporter-openstack-db-create-fwvf6\" (UID: \"58f1a4a9-6f26-483a-966a-f29142a74b4e\") " pod="openstack/mysqld-exporter-openstack-db-create-fwvf6" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.239082 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58f1a4a9-6f26-483a-966a-f29142a74b4e-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-fwvf6\" (UID: \"58f1a4a9-6f26-483a-966a-f29142a74b4e\") " pod="openstack/mysqld-exporter-openstack-db-create-fwvf6" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.257111 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-a004-account-create-rfvch"] Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.259458 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-a004-account-create-rfvch" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.263321 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.279479 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-a004-account-create-rfvch"] Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.300084 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.310895 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-rgp58"] Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.315632 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.339427 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h4wj\" (UniqueName: \"kubernetes.io/projected/58f1a4a9-6f26-483a-966a-f29142a74b4e-kube-api-access-2h4wj\") pod \"mysqld-exporter-openstack-db-create-fwvf6\" (UID: \"58f1a4a9-6f26-483a-966a-f29142a74b4e\") " pod="openstack/mysqld-exporter-openstack-db-create-fwvf6" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.340717 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dbv9\" (UniqueName: \"kubernetes.io/projected/bb9cf31d-94cf-471c-ab71-668c1cbdcffd-kube-api-access-2dbv9\") pod \"mysqld-exporter-a004-account-create-rfvch\" (UID: \"bb9cf31d-94cf-471c-ab71-668c1cbdcffd\") " pod="openstack/mysqld-exporter-a004-account-create-rfvch" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.346912 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9cf31d-94cf-471c-ab71-668c1cbdcffd-operator-scripts\") pod \"mysqld-exporter-a004-account-create-rfvch\" (UID: \"bb9cf31d-94cf-471c-ab71-668c1cbdcffd\") " pod="openstack/mysqld-exporter-a004-account-create-rfvch" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.346675 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fwvf6" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.398518 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-rgp58"] Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.455016 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.455053 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.455092 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.455126 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dbv9\" (UniqueName: \"kubernetes.io/projected/bb9cf31d-94cf-471c-ab71-668c1cbdcffd-kube-api-access-2dbv9\") pod \"mysqld-exporter-a004-account-create-rfvch\" (UID: \"bb9cf31d-94cf-471c-ab71-668c1cbdcffd\") " pod="openstack/mysqld-exporter-a004-account-create-rfvch" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.455146 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7tzw\" (UniqueName: \"kubernetes.io/projected/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-kube-api-access-t7tzw\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.455234 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9cf31d-94cf-471c-ab71-668c1cbdcffd-operator-scripts\") pod \"mysqld-exporter-a004-account-create-rfvch\" (UID: \"bb9cf31d-94cf-471c-ab71-668c1cbdcffd\") " pod="openstack/mysqld-exporter-a004-account-create-rfvch" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.455312 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-config\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.456311 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9cf31d-94cf-471c-ab71-668c1cbdcffd-operator-scripts\") pod \"mysqld-exporter-a004-account-create-rfvch\" (UID: \"bb9cf31d-94cf-471c-ab71-668c1cbdcffd\") " pod="openstack/mysqld-exporter-a004-account-create-rfvch" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.510352 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dbv9\" (UniqueName: \"kubernetes.io/projected/bb9cf31d-94cf-471c-ab71-668c1cbdcffd-kube-api-access-2dbv9\") pod \"mysqld-exporter-a004-account-create-rfvch\" (UID: \"bb9cf31d-94cf-471c-ab71-668c1cbdcffd\") " pod="openstack/mysqld-exporter-a004-account-create-rfvch" Nov 24 21:40:41 crc 
kubenswrapper[4915]: I1124 21:40:41.567443 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-config\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.567499 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.567517 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.567581 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.567651 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7tzw\" (UniqueName: \"kubernetes.io/projected/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-kube-api-access-t7tzw\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.568755 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-config\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.569339 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.569893 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.570470 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.585852 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-a004-account-create-rfvch" Nov 24 21:40:41 crc kubenswrapper[4915]: I1124 21:40:41.599960 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7tzw\" (UniqueName: \"kubernetes.io/projected/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-kube-api-access-t7tzw\") pod \"dnsmasq-dns-b8fbc5445-rgp58\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:41.677026 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:41.710612 4915 generic.go:334] "Generic (PLEG): container finished" podID="8e339ac3-4a26-4a61-8887-e007e1f3d35c" containerID="8996b7c9d8affc73412b71985eac6479d526428730be7e006bc597787068b8ed" exitCode=0 Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:41.710879 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59d9-account-create-tk9r7" event={"ID":"8e339ac3-4a26-4a61-8887-e007e1f3d35c","Type":"ContainerDied","Data":"8996b7c9d8affc73412b71985eac6479d526428730be7e006bc597787068b8ed"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:41.712170 4915 generic.go:334] "Generic (PLEG): container finished" podID="bb6d5ae7-edbd-4f11-8635-02213556ff48" containerID="1a55bdb8125041c319557e9986b7c8acd09fb081321ba18793bbe20a0b54fd2f" exitCode=0 Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:41.712231 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c522-account-create-hp224" event={"ID":"bb6d5ae7-edbd-4f11-8635-02213556ff48","Type":"ContainerDied","Data":"1a55bdb8125041c319557e9986b7c8acd09fb081321ba18793bbe20a0b54fd2f"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:41.715923 4915 generic.go:334] "Generic (PLEG): container finished" podID="8ec792db-f087-4f1f-a6d5-83e85ca536f9" 
containerID="94c360f9dd8172b2701ffde4c9b1235e3b7bd24a13029ca55ae0ab784fee5420" exitCode=0 Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:41.716016 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rp6qr" event={"ID":"8ec792db-f087-4f1f-a6d5-83e85ca536f9","Type":"ContainerDied","Data":"94c360f9dd8172b2701ffde4c9b1235e3b7bd24a13029ca55ae0ab784fee5420"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:41.720917 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-381b-account-create-2p98p" event={"ID":"1bfad247-8a41-40c1-876a-a8948d106c5f","Type":"ContainerStarted","Data":"a753f3b8ae1ff69ea19c039af8cf90c9f555d2220aa8cfe8e9252a90a2c85980"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:41.721945 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" podUID="5c220a3f-cd28-4ec2-9dd4-54bf620b874f" containerName="dnsmasq-dns" containerID="cri-o://44c2eb5bdf0da5e246548f1d2ef03a4d19a509aec35c968db9f50e0f1a990917" gracePeriod=10 Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:41.773010 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-381b-account-create-2p98p" podStartSLOduration=2.772993285 podStartE2EDuration="2.772993285s" podCreationTimestamp="2025-11-24 21:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:41.768147906 +0000 UTC m=+1260.084400099" watchObservedRunningTime="2025-11-24 21:40:41.772993285 +0000 UTC m=+1260.089245458" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:41.932055 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fwvf6"] Nov 24 21:40:46 crc kubenswrapper[4915]: W1124 21:40:41.934270 4915 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58f1a4a9_6f26_483a_966a_f29142a74b4e.slice/crio-9e67d203f82bf90a55eef9da2a682fb8b2b1041654e16572b6e372cc5fbb71ec WatchSource:0}: Error finding container 9e67d203f82bf90a55eef9da2a682fb8b2b1041654e16572b6e372cc5fbb71ec: Status 404 returned error can't find the container with id 9e67d203f82bf90a55eef9da2a682fb8b2b1041654e16572b6e372cc5fbb71ec Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.447017 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.518452 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.518707 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.522201 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.524230 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.524266 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-khgsq" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.524302 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.598560 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.598693 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.599357 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ea1dafcf-631a-4ae6-8aad-d716b977402d-lock\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.599425 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ea1dafcf-631a-4ae6-8aad-d716b977402d-cache\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.599641 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwmjm\" (UniqueName: \"kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-kube-api-access-mwmjm\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.701109 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.701211 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift\") pod 
\"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.701268 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ea1dafcf-631a-4ae6-8aad-d716b977402d-lock\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.701300 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ea1dafcf-631a-4ae6-8aad-d716b977402d-cache\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.701345 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwmjm\" (UniqueName: \"kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-kube-api-access-mwmjm\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.701484 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: E1124 21:40:42.701767 4915 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:40:46 crc kubenswrapper[4915]: E1124 21:40:42.701801 4915 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:40:46 crc kubenswrapper[4915]: E1124 21:40:42.701836 
4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift podName:ea1dafcf-631a-4ae6-8aad-d716b977402d nodeName:}" failed. No retries permitted until 2025-11-24 21:40:43.201824129 +0000 UTC m=+1261.518076302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift") pod "swift-storage-0" (UID: "ea1dafcf-631a-4ae6-8aad-d716b977402d") : configmap "swift-ring-files" not found Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.702218 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ea1dafcf-631a-4ae6-8aad-d716b977402d-cache\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.702346 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ea1dafcf-631a-4ae6-8aad-d716b977402d-lock\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.735241 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwmjm\" (UniqueName: \"kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-kube-api-access-mwmjm\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.748947 4915 generic.go:334] "Generic (PLEG): container finished" podID="5c220a3f-cd28-4ec2-9dd4-54bf620b874f" containerID="44c2eb5bdf0da5e246548f1d2ef03a4d19a509aec35c968db9f50e0f1a990917" exitCode=0 Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.749006 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" event={"ID":"5c220a3f-cd28-4ec2-9dd4-54bf620b874f","Type":"ContainerDied","Data":"44c2eb5bdf0da5e246548f1d2ef03a4d19a509aec35c968db9f50e0f1a990917"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.750536 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.772136 4915 generic.go:334] "Generic (PLEG): container finished" podID="59f54e36-0033-4e91-be8b-7d447d666d04" containerID="987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133" exitCode=0 Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.772226 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"59f54e36-0033-4e91-be8b-7d447d666d04","Type":"ContainerDied","Data":"987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.782979 4915 generic.go:334] "Generic (PLEG): container finished" podID="b449b4ff-1966-465a-8df7-13c2c1a28f75" containerID="a78fb559c775d164594a142ecac63fb8bcdd92613226ab8ccb2ac1d761a800cb" exitCode=0 Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.783033 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pxmbk" event={"ID":"b449b4ff-1966-465a-8df7-13c2c1a28f75","Type":"ContainerDied","Data":"a78fb559c775d164594a142ecac63fb8bcdd92613226ab8ccb2ac1d761a800cb"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.808367 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fwvf6" event={"ID":"58f1a4a9-6f26-483a-966a-f29142a74b4e","Type":"ContainerStarted","Data":"9e67d203f82bf90a55eef9da2a682fb8b2b1041654e16572b6e372cc5fbb71ec"} Nov 24 21:40:46 
crc kubenswrapper[4915]: I1124 21:40:42.824933 4915 generic.go:334] "Generic (PLEG): container finished" podID="1bfad247-8a41-40c1-876a-a8948d106c5f" containerID="a753f3b8ae1ff69ea19c039af8cf90c9f555d2220aa8cfe8e9252a90a2c85980" exitCode=0 Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.825220 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-381b-account-create-2p98p" event={"ID":"1bfad247-8a41-40c1-876a-a8948d106c5f","Type":"ContainerDied","Data":"a753f3b8ae1ff69ea19c039af8cf90c9f555d2220aa8cfe8e9252a90a2c85980"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.858130 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-lfsfp"] Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.872382 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.874998 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.888445 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.888580 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:42.893847 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lfsfp"] Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.024126 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-dispersionconf\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc 
kubenswrapper[4915]: I1124 21:40:43.024215 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d354b8f9-3820-4897-8a8a-021d4a98668a-scripts\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.024279 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d354b8f9-3820-4897-8a8a-021d4a98668a-ring-data-devices\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.024310 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-swiftconf\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.024390 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtjrt\" (UniqueName: \"kubernetes.io/projected/d354b8f9-3820-4897-8a8a-021d4a98668a-kube-api-access-jtjrt\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.024449 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d354b8f9-3820-4897-8a8a-021d4a98668a-etc-swift\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 
crc kubenswrapper[4915]: I1124 21:40:43.024516 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-combined-ca-bundle\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.126460 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-combined-ca-bundle\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.126541 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-dispersionconf\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.126626 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d354b8f9-3820-4897-8a8a-021d4a98668a-scripts\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.126691 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d354b8f9-3820-4897-8a8a-021d4a98668a-ring-data-devices\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.126718 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-swiftconf\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.126800 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtjrt\" (UniqueName: \"kubernetes.io/projected/d354b8f9-3820-4897-8a8a-021d4a98668a-kube-api-access-jtjrt\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.126855 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d354b8f9-3820-4897-8a8a-021d4a98668a-etc-swift\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.127339 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d354b8f9-3820-4897-8a8a-021d4a98668a-etc-swift\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.127421 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d354b8f9-3820-4897-8a8a-021d4a98668a-scripts\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.127896 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/d354b8f9-3820-4897-8a8a-021d4a98668a-ring-data-devices\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.132296 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-dispersionconf\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.132846 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-combined-ca-bundle\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.145000 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-swiftconf\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.165474 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtjrt\" (UniqueName: \"kubernetes.io/projected/d354b8f9-3820-4897-8a8a-021d4a98668a-kube-api-access-jtjrt\") pod \"swift-ring-rebalance-lfsfp\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.228752 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift\") pod \"swift-storage-0\" (UID: 
\"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: E1124 21:40:43.229052 4915 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:40:46 crc kubenswrapper[4915]: E1124 21:40:43.229074 4915 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:40:46 crc kubenswrapper[4915]: E1124 21:40:43.229135 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift podName:ea1dafcf-631a-4ae6-8aad-d716b977402d nodeName:}" failed. No retries permitted until 2025-11-24 21:40:44.229118383 +0000 UTC m=+1262.545370556 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift") pod "swift-storage-0" (UID: "ea1dafcf-631a-4ae6-8aad-d716b977402d") : configmap "swift-ring-files" not found Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:43.258972 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:44.249311 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: E1124 21:40:44.249528 4915 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:40:46 crc kubenswrapper[4915]: E1124 21:40:44.249845 4915 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:40:46 crc kubenswrapper[4915]: E1124 21:40:44.249927 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift podName:ea1dafcf-631a-4ae6-8aad-d716b977402d nodeName:}" failed. No retries permitted until 2025-11-24 21:40:46.24990269 +0000 UTC m=+1264.566154883 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift") pod "swift-storage-0" (UID: "ea1dafcf-631a-4ae6-8aad-d716b977402d") : configmap "swift-ring-files" not found Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:45.855076 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fwvf6" event={"ID":"58f1a4a9-6f26-483a-966a-f29142a74b4e","Type":"ContainerStarted","Data":"58a1174e7fb92acce5c3a2ba0125a276a4ff6fb58e91a800cc375746bfc34084"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.298421 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:46 crc kubenswrapper[4915]: E1124 21:40:46.298596 4915 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:40:46 crc kubenswrapper[4915]: E1124 21:40:46.299055 4915 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:40:46 crc kubenswrapper[4915]: E1124 21:40:46.299113 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift podName:ea1dafcf-631a-4ae6-8aad-d716b977402d nodeName:}" failed. No retries permitted until 2025-11-24 21:40:50.299095916 +0000 UTC m=+1268.615348079 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift") pod "swift-storage-0" (UID: "ea1dafcf-631a-4ae6-8aad-d716b977402d") : configmap "swift-ring-files" not found Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.817553 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c522-account-create-hp224" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.826665 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rp6qr" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.894105 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-59d9-account-create-tk9r7" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.903103 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-59d9-account-create-tk9r7" event={"ID":"8e339ac3-4a26-4a61-8887-e007e1f3d35c","Type":"ContainerDied","Data":"55987e3e86530cd9e4d46d479e4feb19a6ca493297686901eb49d24fb151aa73"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.903147 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55987e3e86530cd9e4d46d479e4feb19a6ca493297686901eb49d24fb151aa73" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.903111 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-59d9-account-create-tk9r7" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.906108 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pxmbk" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.906343 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c522-account-create-hp224" event={"ID":"bb6d5ae7-edbd-4f11-8635-02213556ff48","Type":"ContainerDied","Data":"911f413fdcc834b9836b65c58802a93e7d5eb3b97869b2ba70f5acef31a5462a"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.906367 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="911f413fdcc834b9836b65c58802a93e7d5eb3b97869b2ba70f5acef31a5462a" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.906422 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c522-account-create-hp224" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.910411 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq984\" (UniqueName: \"kubernetes.io/projected/8e339ac3-4a26-4a61-8887-e007e1f3d35c-kube-api-access-bq984\") pod \"8e339ac3-4a26-4a61-8887-e007e1f3d35c\" (UID: \"8e339ac3-4a26-4a61-8887-e007e1f3d35c\") " Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.910466 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz7zj\" (UniqueName: \"kubernetes.io/projected/bb6d5ae7-edbd-4f11-8635-02213556ff48-kube-api-access-lz7zj\") pod \"bb6d5ae7-edbd-4f11-8635-02213556ff48\" (UID: \"bb6d5ae7-edbd-4f11-8635-02213556ff48\") " Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.910515 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec792db-f087-4f1f-a6d5-83e85ca536f9-operator-scripts\") pod \"8ec792db-f087-4f1f-a6d5-83e85ca536f9\" (UID: \"8ec792db-f087-4f1f-a6d5-83e85ca536f9\") " Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.910677 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zj9bh\" (UniqueName: \"kubernetes.io/projected/8ec792db-f087-4f1f-a6d5-83e85ca536f9-kube-api-access-zj9bh\") pod \"8ec792db-f087-4f1f-a6d5-83e85ca536f9\" (UID: \"8ec792db-f087-4f1f-a6d5-83e85ca536f9\") " Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.910709 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb6d5ae7-edbd-4f11-8635-02213556ff48-operator-scripts\") pod \"bb6d5ae7-edbd-4f11-8635-02213556ff48\" (UID: \"bb6d5ae7-edbd-4f11-8635-02213556ff48\") " Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.910735 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e339ac3-4a26-4a61-8887-e007e1f3d35c-operator-scripts\") pod \"8e339ac3-4a26-4a61-8887-e007e1f3d35c\" (UID: \"8e339ac3-4a26-4a61-8887-e007e1f3d35c\") " Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.911416 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ec792db-f087-4f1f-a6d5-83e85ca536f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ec792db-f087-4f1f-a6d5-83e85ca536f9" (UID: "8ec792db-f087-4f1f-a6d5-83e85ca536f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.911572 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e339ac3-4a26-4a61-8887-e007e1f3d35c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e339ac3-4a26-4a61-8887-e007e1f3d35c" (UID: "8e339ac3-4a26-4a61-8887-e007e1f3d35c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.912428 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb6d5ae7-edbd-4f11-8635-02213556ff48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb6d5ae7-edbd-4f11-8635-02213556ff48" (UID: "bb6d5ae7-edbd-4f11-8635-02213556ff48"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.917154 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ec792db-f087-4f1f-a6d5-83e85ca536f9-kube-api-access-zj9bh" (OuterVolumeSpecName: "kube-api-access-zj9bh") pod "8ec792db-f087-4f1f-a6d5-83e85ca536f9" (UID: "8ec792db-f087-4f1f-a6d5-83e85ca536f9"). InnerVolumeSpecName "kube-api-access-zj9bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.917892 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb6d5ae7-edbd-4f11-8635-02213556ff48-kube-api-access-lz7zj" (OuterVolumeSpecName: "kube-api-access-lz7zj") pod "bb6d5ae7-edbd-4f11-8635-02213556ff48" (UID: "bb6d5ae7-edbd-4f11-8635-02213556ff48"). InnerVolumeSpecName "kube-api-access-lz7zj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.919978 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-381b-account-create-2p98p" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.920154 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-rp6qr" event={"ID":"8ec792db-f087-4f1f-a6d5-83e85ca536f9","Type":"ContainerDied","Data":"21814ab4da157a832d8889b2a836146c88cd26cd4c7316762356f0d0d5c74ec1"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.920215 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21814ab4da157a832d8889b2a836146c88cd26cd4c7316762356f0d0d5c74ec1" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.920725 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-rp6qr" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.922212 4915 generic.go:334] "Generic (PLEG): container finished" podID="58f1a4a9-6f26-483a-966a-f29142a74b4e" containerID="58a1174e7fb92acce5c3a2ba0125a276a4ff6fb58e91a800cc375746bfc34084" exitCode=0 Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.922267 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fwvf6" event={"ID":"58f1a4a9-6f26-483a-966a-f29142a74b4e","Type":"ContainerDied","Data":"58a1174e7fb92acce5c3a2ba0125a276a4ff6fb58e91a800cc375746bfc34084"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.922674 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e339ac3-4a26-4a61-8887-e007e1f3d35c-kube-api-access-bq984" (OuterVolumeSpecName: "kube-api-access-bq984") pod "8e339ac3-4a26-4a61-8887-e007e1f3d35c" (UID: "8e339ac3-4a26-4a61-8887-e007e1f3d35c"). InnerVolumeSpecName "kube-api-access-bq984". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.926010 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" event={"ID":"5c220a3f-cd28-4ec2-9dd4-54bf620b874f","Type":"ContainerDied","Data":"be7d41ae6cf51655d87b4288d3a507bea3a019a15d7386bf06f0391bd421c7e2"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.926037 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be7d41ae6cf51655d87b4288d3a507bea3a019a15d7386bf06f0391bd421c7e2" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.945422 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pxmbk" event={"ID":"b449b4ff-1966-465a-8df7-13c2c1a28f75","Type":"ContainerDied","Data":"2e6f0aa91bb1453e6020b107cc42c86e5ca475167855f6aa37118cab186db6f6"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.945470 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e6f0aa91bb1453e6020b107cc42c86e5ca475167855f6aa37118cab186db6f6" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.945733 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pxmbk" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.953081 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-381b-account-create-2p98p" event={"ID":"1bfad247-8a41-40c1-876a-a8948d106c5f","Type":"ContainerDied","Data":"74156e1ddb815e607ba644668f5a0e96eb68975dd18e462505c299f27c3ec101"} Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.953126 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74156e1ddb815e607ba644668f5a0e96eb68975dd18e462505c299f27c3ec101" Nov 24 21:40:46 crc kubenswrapper[4915]: I1124 21:40:46.953173 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-381b-account-create-2p98p" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.013301 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bfad247-8a41-40c1-876a-a8948d106c5f-operator-scripts\") pod \"1bfad247-8a41-40c1-876a-a8948d106c5f\" (UID: \"1bfad247-8a41-40c1-876a-a8948d106c5f\") " Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.013824 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8msgt\" (UniqueName: \"kubernetes.io/projected/1bfad247-8a41-40c1-876a-a8948d106c5f-kube-api-access-8msgt\") pod \"1bfad247-8a41-40c1-876a-a8948d106c5f\" (UID: \"1bfad247-8a41-40c1-876a-a8948d106c5f\") " Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.014101 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8gt5\" (UniqueName: \"kubernetes.io/projected/b449b4ff-1966-465a-8df7-13c2c1a28f75-kube-api-access-d8gt5\") pod \"b449b4ff-1966-465a-8df7-13c2c1a28f75\" (UID: \"b449b4ff-1966-465a-8df7-13c2c1a28f75\") " Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.014329 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b449b4ff-1966-465a-8df7-13c2c1a28f75-operator-scripts\") pod \"b449b4ff-1966-465a-8df7-13c2c1a28f75\" (UID: \"b449b4ff-1966-465a-8df7-13c2c1a28f75\") " Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.014844 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bfad247-8a41-40c1-876a-a8948d106c5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bfad247-8a41-40c1-876a-a8948d106c5f" (UID: "1bfad247-8a41-40c1-876a-a8948d106c5f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.015093 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b449b4ff-1966-465a-8df7-13c2c1a28f75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b449b4ff-1966-465a-8df7-13c2c1a28f75" (UID: "b449b4ff-1966-465a-8df7-13c2c1a28f75"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.015310 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zj9bh\" (UniqueName: \"kubernetes.io/projected/8ec792db-f087-4f1f-a6d5-83e85ca536f9-kube-api-access-zj9bh\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.015331 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb6d5ae7-edbd-4f11-8635-02213556ff48-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.015344 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e339ac3-4a26-4a61-8887-e007e1f3d35c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.015378 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq984\" (UniqueName: \"kubernetes.io/projected/8e339ac3-4a26-4a61-8887-e007e1f3d35c-kube-api-access-bq984\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.015391 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz7zj\" (UniqueName: \"kubernetes.io/projected/bb6d5ae7-edbd-4f11-8635-02213556ff48-kube-api-access-lz7zj\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.015402 4915 reconciler_common.go:293] "Volume 
detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ec792db-f087-4f1f-a6d5-83e85ca536f9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.017580 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bfad247-8a41-40c1-876a-a8948d106c5f-kube-api-access-8msgt" (OuterVolumeSpecName: "kube-api-access-8msgt") pod "1bfad247-8a41-40c1-876a-a8948d106c5f" (UID: "1bfad247-8a41-40c1-876a-a8948d106c5f"). InnerVolumeSpecName "kube-api-access-8msgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.019020 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b449b4ff-1966-465a-8df7-13c2c1a28f75-kube-api-access-d8gt5" (OuterVolumeSpecName: "kube-api-access-d8gt5") pod "b449b4ff-1966-465a-8df7-13c2c1a28f75" (UID: "b449b4ff-1966-465a-8df7-13c2c1a28f75"). InnerVolumeSpecName "kube-api-access-d8gt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.024254 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.117024 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-config\") pod \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.117168 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlm6j\" (UniqueName: \"kubernetes.io/projected/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-kube-api-access-zlm6j\") pod \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.117212 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-ovsdbserver-sb\") pod \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.117286 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-dns-svc\") pod \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\" (UID: \"5c220a3f-cd28-4ec2-9dd4-54bf620b874f\") " Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.117828 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8msgt\" (UniqueName: \"kubernetes.io/projected/1bfad247-8a41-40c1-876a-a8948d106c5f-kube-api-access-8msgt\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.117845 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8gt5\" (UniqueName: 
\"kubernetes.io/projected/b449b4ff-1966-465a-8df7-13c2c1a28f75-kube-api-access-d8gt5\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.117854 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b449b4ff-1966-465a-8df7-13c2c1a28f75-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.117866 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bfad247-8a41-40c1-876a-a8948d106c5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.120736 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-kube-api-access-zlm6j" (OuterVolumeSpecName: "kube-api-access-zlm6j") pod "5c220a3f-cd28-4ec2-9dd4-54bf620b874f" (UID: "5c220a3f-cd28-4ec2-9dd4-54bf620b874f"). InnerVolumeSpecName "kube-api-access-zlm6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.135082 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-a004-account-create-rfvch"] Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.158633 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lfsfp"] Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.168864 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.173935 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-rgp58"] Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.200217 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5c220a3f-cd28-4ec2-9dd4-54bf620b874f" (UID: "5c220a3f-cd28-4ec2-9dd4-54bf620b874f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.215653 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-config" (OuterVolumeSpecName: "config") pod "5c220a3f-cd28-4ec2-9dd4-54bf620b874f" (UID: "5c220a3f-cd28-4ec2-9dd4-54bf620b874f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.221698 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.221732 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlm6j\" (UniqueName: \"kubernetes.io/projected/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-kube-api-access-zlm6j\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.221746 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.232482 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5c220a3f-cd28-4ec2-9dd4-54bf620b874f" (UID: "5c220a3f-cd28-4ec2-9dd4-54bf620b874f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.324138 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5c220a3f-cd28-4ec2-9dd4-54bf620b874f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.604956 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.971639 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lfsfp" event={"ID":"d354b8f9-3820-4897-8a8a-021d4a98668a","Type":"ContainerStarted","Data":"3d603db72715e33a52eb4bc4a0636e91878d2ef498d0599ebd3dfb84017add43"} Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.973873 4915 generic.go:334] "Generic (PLEG): container finished" podID="9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" containerID="972ed35f4ad1137b9f6bb8bdd959a849033053ce8567e5808558e4bed6722c94" exitCode=0 Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.973934 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" event={"ID":"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4","Type":"ContainerDied","Data":"972ed35f4ad1137b9f6bb8bdd959a849033053ce8567e5808558e4bed6722c94"} Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.973956 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" event={"ID":"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4","Type":"ContainerStarted","Data":"18206cc990716ea46d0639005f4a12b02a156934b7c2e527e447c1c548e00a15"} Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.976042 4915 generic.go:334] "Generic (PLEG): container finished" podID="bb9cf31d-94cf-471c-ab71-668c1cbdcffd" containerID="e3b2b4d7bf833ee1b510ba541b0232acb16649f203d91419473b6ab4dd29d0c2" exitCode=0 Nov 24 21:40:47 crc kubenswrapper[4915]: 
I1124 21:40:47.976244 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a004-account-create-rfvch" event={"ID":"bb9cf31d-94cf-471c-ab71-668c1cbdcffd","Type":"ContainerDied","Data":"e3b2b4d7bf833ee1b510ba541b0232acb16649f203d91419473b6ab4dd29d0c2"} Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.976269 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a004-account-create-rfvch" event={"ID":"bb9cf31d-94cf-471c-ab71-668c1cbdcffd","Type":"ContainerStarted","Data":"d1303b434c9828efead1b571ac9ddbd89601865d6c6471dac841a39d36e3633b"} Nov 24 21:40:47 crc kubenswrapper[4915]: I1124 21:40:47.976318 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-xpvqz" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.046800 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-xpvqz"] Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.054093 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-xpvqz"] Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.404429 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fwvf6" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.444464 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c220a3f-cd28-4ec2-9dd4-54bf620b874f" path="/var/lib/kubelet/pods/5c220a3f-cd28-4ec2-9dd4-54bf620b874f/volumes" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.455570 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h4wj\" (UniqueName: \"kubernetes.io/projected/58f1a4a9-6f26-483a-966a-f29142a74b4e-kube-api-access-2h4wj\") pod \"58f1a4a9-6f26-483a-966a-f29142a74b4e\" (UID: \"58f1a4a9-6f26-483a-966a-f29142a74b4e\") " Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.455628 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58f1a4a9-6f26-483a-966a-f29142a74b4e-operator-scripts\") pod \"58f1a4a9-6f26-483a-966a-f29142a74b4e\" (UID: \"58f1a4a9-6f26-483a-966a-f29142a74b4e\") " Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.458052 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58f1a4a9-6f26-483a-966a-f29142a74b4e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58f1a4a9-6f26-483a-966a-f29142a74b4e" (UID: "58f1a4a9-6f26-483a-966a-f29142a74b4e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.465026 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58f1a4a9-6f26-483a-966a-f29142a74b4e-kube-api-access-2h4wj" (OuterVolumeSpecName: "kube-api-access-2h4wj") pod "58f1a4a9-6f26-483a-966a-f29142a74b4e" (UID: "58f1a4a9-6f26-483a-966a-f29142a74b4e"). InnerVolumeSpecName "kube-api-access-2h4wj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.557571 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h4wj\" (UniqueName: \"kubernetes.io/projected/58f1a4a9-6f26-483a-966a-f29142a74b4e-kube-api-access-2h4wj\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.557609 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58f1a4a9-6f26-483a-966a-f29142a74b4e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.822758 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-qgvss"] Nov 24 21:40:48 crc kubenswrapper[4915]: E1124 21:40:48.823293 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6d5ae7-edbd-4f11-8635-02213556ff48" containerName="mariadb-account-create" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823316 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6d5ae7-edbd-4f11-8635-02213556ff48" containerName="mariadb-account-create" Nov 24 21:40:48 crc kubenswrapper[4915]: E1124 21:40:48.823335 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e339ac3-4a26-4a61-8887-e007e1f3d35c" containerName="mariadb-account-create" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823342 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e339ac3-4a26-4a61-8887-e007e1f3d35c" containerName="mariadb-account-create" Nov 24 21:40:48 crc kubenswrapper[4915]: E1124 21:40:48.823361 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c220a3f-cd28-4ec2-9dd4-54bf620b874f" containerName="dnsmasq-dns" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823371 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c220a3f-cd28-4ec2-9dd4-54bf620b874f" containerName="dnsmasq-dns" Nov 24 21:40:48 crc 
kubenswrapper[4915]: E1124 21:40:48.823385 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b449b4ff-1966-465a-8df7-13c2c1a28f75" containerName="mariadb-database-create" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823391 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b449b4ff-1966-465a-8df7-13c2c1a28f75" containerName="mariadb-database-create" Nov 24 21:40:48 crc kubenswrapper[4915]: E1124 21:40:48.823402 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c220a3f-cd28-4ec2-9dd4-54bf620b874f" containerName="init" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823407 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c220a3f-cd28-4ec2-9dd4-54bf620b874f" containerName="init" Nov 24 21:40:48 crc kubenswrapper[4915]: E1124 21:40:48.823427 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ec792db-f087-4f1f-a6d5-83e85ca536f9" containerName="mariadb-database-create" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823434 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ec792db-f087-4f1f-a6d5-83e85ca536f9" containerName="mariadb-database-create" Nov 24 21:40:48 crc kubenswrapper[4915]: E1124 21:40:48.823443 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58f1a4a9-6f26-483a-966a-f29142a74b4e" containerName="mariadb-database-create" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823448 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f1a4a9-6f26-483a-966a-f29142a74b4e" containerName="mariadb-database-create" Nov 24 21:40:48 crc kubenswrapper[4915]: E1124 21:40:48.823459 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bfad247-8a41-40c1-876a-a8948d106c5f" containerName="mariadb-account-create" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823465 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bfad247-8a41-40c1-876a-a8948d106c5f" 
containerName="mariadb-account-create" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823641 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c220a3f-cd28-4ec2-9dd4-54bf620b874f" containerName="dnsmasq-dns" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823656 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b449b4ff-1966-465a-8df7-13c2c1a28f75" containerName="mariadb-database-create" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823668 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb6d5ae7-edbd-4f11-8635-02213556ff48" containerName="mariadb-account-create" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823684 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bfad247-8a41-40c1-876a-a8948d106c5f" containerName="mariadb-account-create" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823692 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e339ac3-4a26-4a61-8887-e007e1f3d35c" containerName="mariadb-account-create" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823704 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ec792db-f087-4f1f-a6d5-83e85ca536f9" containerName="mariadb-database-create" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.823716 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="58f1a4a9-6f26-483a-966a-f29142a74b4e" containerName="mariadb-database-create" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.824484 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qgvss" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.830475 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qgvss"] Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.864419 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-479xb\" (UniqueName: \"kubernetes.io/projected/3b5f98ba-ac45-4e86-b59f-34018eebddf8-kube-api-access-479xb\") pod \"keystone-db-create-qgvss\" (UID: \"3b5f98ba-ac45-4e86-b59f-34018eebddf8\") " pod="openstack/keystone-db-create-qgvss" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.864537 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b5f98ba-ac45-4e86-b59f-34018eebddf8-operator-scripts\") pod \"keystone-db-create-qgvss\" (UID: \"3b5f98ba-ac45-4e86-b59f-34018eebddf8\") " pod="openstack/keystone-db-create-qgvss" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.966404 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-479xb\" (UniqueName: \"kubernetes.io/projected/3b5f98ba-ac45-4e86-b59f-34018eebddf8-kube-api-access-479xb\") pod \"keystone-db-create-qgvss\" (UID: \"3b5f98ba-ac45-4e86-b59f-34018eebddf8\") " pod="openstack/keystone-db-create-qgvss" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.966496 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b5f98ba-ac45-4e86-b59f-34018eebddf8-operator-scripts\") pod \"keystone-db-create-qgvss\" (UID: \"3b5f98ba-ac45-4e86-b59f-34018eebddf8\") " pod="openstack/keystone-db-create-qgvss" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.967287 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3b5f98ba-ac45-4e86-b59f-34018eebddf8-operator-scripts\") pod \"keystone-db-create-qgvss\" (UID: \"3b5f98ba-ac45-4e86-b59f-34018eebddf8\") " pod="openstack/keystone-db-create-qgvss" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.984550 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-479xb\" (UniqueName: \"kubernetes.io/projected/3b5f98ba-ac45-4e86-b59f-34018eebddf8-kube-api-access-479xb\") pod \"keystone-db-create-qgvss\" (UID: \"3b5f98ba-ac45-4e86-b59f-34018eebddf8\") " pod="openstack/keystone-db-create-qgvss" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.987730 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" event={"ID":"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4","Type":"ContainerStarted","Data":"fe90e0d9d4005b95e72cc51282a80126cf2845bfec3d239fe19a87421eec3a4e"} Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.987834 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.990138 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-fwvf6" event={"ID":"58f1a4a9-6f26-483a-966a-f29142a74b4e","Type":"ContainerDied","Data":"9e67d203f82bf90a55eef9da2a682fb8b2b1041654e16572b6e372cc5fbb71ec"} Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.990166 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e67d203f82bf90a55eef9da2a682fb8b2b1041654e16572b6e372cc5fbb71ec" Nov 24 21:40:48 crc kubenswrapper[4915]: I1124 21:40:48.990211 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-fwvf6" Nov 24 21:40:49 crc kubenswrapper[4915]: I1124 21:40:49.149086 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qgvss" Nov 24 21:40:49 crc kubenswrapper[4915]: I1124 21:40:49.445389 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" podStartSLOduration=8.445361634 podStartE2EDuration="8.445361634s" podCreationTimestamp="2025-11-24 21:40:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:40:49.011384898 +0000 UTC m=+1267.327637071" watchObservedRunningTime="2025-11-24 21:40:49.445361634 +0000 UTC m=+1267.761613817" Nov 24 21:40:49 crc kubenswrapper[4915]: I1124 21:40:49.877005 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-wtkdf"] Nov 24 21:40:49 crc kubenswrapper[4915]: I1124 21:40:49.878711 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wtkdf" Nov 24 21:40:49 crc kubenswrapper[4915]: I1124 21:40:49.880952 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qmbqs" Nov 24 21:40:49 crc kubenswrapper[4915]: I1124 21:40:49.881201 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 24 21:40:49 crc kubenswrapper[4915]: I1124 21:40:49.898708 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wtkdf"] Nov 24 21:40:49 crc kubenswrapper[4915]: I1124 21:40:49.989229 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-combined-ca-bundle\") pod \"glance-db-sync-wtkdf\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " pod="openstack/glance-db-sync-wtkdf" Nov 24 21:40:49 crc kubenswrapper[4915]: I1124 21:40:49.989278 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-hpjdq\" (UniqueName: \"kubernetes.io/projected/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-kube-api-access-hpjdq\") pod \"glance-db-sync-wtkdf\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " pod="openstack/glance-db-sync-wtkdf" Nov 24 21:40:49 crc kubenswrapper[4915]: I1124 21:40:49.989409 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-config-data\") pod \"glance-db-sync-wtkdf\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " pod="openstack/glance-db-sync-wtkdf" Nov 24 21:40:49 crc kubenswrapper[4915]: I1124 21:40:49.989479 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-db-sync-config-data\") pod \"glance-db-sync-wtkdf\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " pod="openstack/glance-db-sync-wtkdf" Nov 24 21:40:50 crc kubenswrapper[4915]: I1124 21:40:50.091184 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-config-data\") pod \"glance-db-sync-wtkdf\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " pod="openstack/glance-db-sync-wtkdf" Nov 24 21:40:50 crc kubenswrapper[4915]: I1124 21:40:50.091315 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-db-sync-config-data\") pod \"glance-db-sync-wtkdf\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " pod="openstack/glance-db-sync-wtkdf" Nov 24 21:40:50 crc kubenswrapper[4915]: I1124 21:40:50.091386 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-combined-ca-bundle\") pod \"glance-db-sync-wtkdf\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " pod="openstack/glance-db-sync-wtkdf" Nov 24 21:40:50 crc kubenswrapper[4915]: I1124 21:40:50.091412 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpjdq\" (UniqueName: \"kubernetes.io/projected/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-kube-api-access-hpjdq\") pod \"glance-db-sync-wtkdf\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " pod="openstack/glance-db-sync-wtkdf" Nov 24 21:40:50 crc kubenswrapper[4915]: I1124 21:40:50.097792 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-db-sync-config-data\") pod \"glance-db-sync-wtkdf\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " pod="openstack/glance-db-sync-wtkdf" Nov 24 21:40:50 crc kubenswrapper[4915]: I1124 21:40:50.099482 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-config-data\") pod \"glance-db-sync-wtkdf\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " pod="openstack/glance-db-sync-wtkdf" Nov 24 21:40:50 crc kubenswrapper[4915]: I1124 21:40:50.109523 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-combined-ca-bundle\") pod \"glance-db-sync-wtkdf\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " pod="openstack/glance-db-sync-wtkdf" Nov 24 21:40:50 crc kubenswrapper[4915]: I1124 21:40:50.111671 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpjdq\" (UniqueName: \"kubernetes.io/projected/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-kube-api-access-hpjdq\") pod \"glance-db-sync-wtkdf\" (UID: 
\"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " pod="openstack/glance-db-sync-wtkdf" Nov 24 21:40:50 crc kubenswrapper[4915]: I1124 21:40:50.203676 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-wtkdf" Nov 24 21:40:50 crc kubenswrapper[4915]: I1124 21:40:50.396112 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:50 crc kubenswrapper[4915]: E1124 21:40:50.396577 4915 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:40:50 crc kubenswrapper[4915]: E1124 21:40:50.396595 4915 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:40:50 crc kubenswrapper[4915]: E1124 21:40:50.396644 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift podName:ea1dafcf-631a-4ae6-8aad-d716b977402d nodeName:}" failed. No retries permitted until 2025-11-24 21:40:58.396628056 +0000 UTC m=+1276.712880229 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift") pod "swift-storage-0" (UID: "ea1dafcf-631a-4ae6-8aad-d716b977402d") : configmap "swift-ring-files" not found Nov 24 21:40:51 crc kubenswrapper[4915]: I1124 21:40:51.143919 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-a004-account-create-rfvch" Nov 24 21:40:51 crc kubenswrapper[4915]: I1124 21:40:51.222181 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dbv9\" (UniqueName: \"kubernetes.io/projected/bb9cf31d-94cf-471c-ab71-668c1cbdcffd-kube-api-access-2dbv9\") pod \"bb9cf31d-94cf-471c-ab71-668c1cbdcffd\" (UID: \"bb9cf31d-94cf-471c-ab71-668c1cbdcffd\") " Nov 24 21:40:51 crc kubenswrapper[4915]: I1124 21:40:51.222230 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9cf31d-94cf-471c-ab71-668c1cbdcffd-operator-scripts\") pod \"bb9cf31d-94cf-471c-ab71-668c1cbdcffd\" (UID: \"bb9cf31d-94cf-471c-ab71-668c1cbdcffd\") " Nov 24 21:40:51 crc kubenswrapper[4915]: I1124 21:40:51.223725 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb9cf31d-94cf-471c-ab71-668c1cbdcffd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb9cf31d-94cf-471c-ab71-668c1cbdcffd" (UID: "bb9cf31d-94cf-471c-ab71-668c1cbdcffd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:51 crc kubenswrapper[4915]: I1124 21:40:51.230397 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb9cf31d-94cf-471c-ab71-668c1cbdcffd-kube-api-access-2dbv9" (OuterVolumeSpecName: "kube-api-access-2dbv9") pod "bb9cf31d-94cf-471c-ab71-668c1cbdcffd" (UID: "bb9cf31d-94cf-471c-ab71-668c1cbdcffd"). InnerVolumeSpecName "kube-api-access-2dbv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:51 crc kubenswrapper[4915]: I1124 21:40:51.325643 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dbv9\" (UniqueName: \"kubernetes.io/projected/bb9cf31d-94cf-471c-ab71-668c1cbdcffd-kube-api-access-2dbv9\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:51 crc kubenswrapper[4915]: I1124 21:40:51.325693 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb9cf31d-94cf-471c-ab71-668c1cbdcffd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:52 crc kubenswrapper[4915]: I1124 21:40:52.034053 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-a004-account-create-rfvch" event={"ID":"bb9cf31d-94cf-471c-ab71-668c1cbdcffd","Type":"ContainerDied","Data":"d1303b434c9828efead1b571ac9ddbd89601865d6c6471dac841a39d36e3633b"} Nov 24 21:40:52 crc kubenswrapper[4915]: I1124 21:40:52.034389 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1303b434c9828efead1b571ac9ddbd89601865d6c6471dac841a39d36e3633b" Nov 24 21:40:52 crc kubenswrapper[4915]: I1124 21:40:52.034454 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-a004-account-create-rfvch" Nov 24 21:40:52 crc kubenswrapper[4915]: I1124 21:40:52.685579 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 24 21:40:55 crc kubenswrapper[4915]: I1124 21:40:55.037089 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-wtkdf"] Nov 24 21:40:55 crc kubenswrapper[4915]: W1124 21:40:55.042169 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod815268b1_2fe4_4c1b_a56f_73cc4a0ccf46.slice/crio-80913142c66dece8975985eb6890a00cca9abf7d21d475d9376122f761ffea4f WatchSource:0}: Error finding container 80913142c66dece8975985eb6890a00cca9abf7d21d475d9376122f761ffea4f: Status 404 returned error can't find the container with id 80913142c66dece8975985eb6890a00cca9abf7d21d475d9376122f761ffea4f Nov 24 21:40:55 crc kubenswrapper[4915]: I1124 21:40:55.060439 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wtkdf" event={"ID":"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46","Type":"ContainerStarted","Data":"80913142c66dece8975985eb6890a00cca9abf7d21d475d9376122f761ffea4f"} Nov 24 21:40:55 crc kubenswrapper[4915]: I1124 21:40:55.061449 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-qgvss"] Nov 24 21:40:55 crc kubenswrapper[4915]: I1124 21:40:55.061734 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lfsfp" event={"ID":"d354b8f9-3820-4897-8a8a-021d4a98668a","Type":"ContainerStarted","Data":"5a5680eda45ab1e1c29acc4a67d4032bd075799238243710e4a18a78ebaa9789"} Nov 24 21:40:55 crc kubenswrapper[4915]: I1124 21:40:55.065405 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"59f54e36-0033-4e91-be8b-7d447d666d04","Type":"ContainerStarted","Data":"6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36"} Nov 24 21:40:55 crc kubenswrapper[4915]: I1124 21:40:55.082086 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-lfsfp" podStartSLOduration=5.714499577 podStartE2EDuration="13.082068997s" podCreationTimestamp="2025-11-24 21:40:42 +0000 UTC" firstStartedPulling="2025-11-24 21:40:47.173538569 +0000 UTC m=+1265.489790742" lastFinishedPulling="2025-11-24 21:40:54.541107949 +0000 UTC m=+1272.857360162" observedRunningTime="2025-11-24 21:40:55.079335444 +0000 UTC m=+1273.395587627" watchObservedRunningTime="2025-11-24 21:40:55.082068997 +0000 UTC m=+1273.398321180" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.076515 4915 generic.go:334] "Generic (PLEG): container finished" podID="3b5f98ba-ac45-4e86-b59f-34018eebddf8" containerID="9f6c9aee891e4922f3827aa4b4d956552726de373a24435c32c33572652bb41a" exitCode=0 Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.076664 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qgvss" event={"ID":"3b5f98ba-ac45-4e86-b59f-34018eebddf8","Type":"ContainerDied","Data":"9f6c9aee891e4922f3827aa4b4d956552726de373a24435c32c33572652bb41a"} Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.076874 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qgvss" event={"ID":"3b5f98ba-ac45-4e86-b59f-34018eebddf8","Type":"ContainerStarted","Data":"f44744969e7f6d2555f7eb3792e257e1d50b8b77613c42c78aedfc7ec875e19d"} Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.494479 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6"] Nov 24 21:40:56 crc kubenswrapper[4915]: E1124 21:40:56.495255 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bb9cf31d-94cf-471c-ab71-668c1cbdcffd" containerName="mariadb-account-create" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.495273 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb9cf31d-94cf-471c-ab71-668c1cbdcffd" containerName="mariadb-account-create" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.495470 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb9cf31d-94cf-471c-ab71-668c1cbdcffd" containerName="mariadb-account-create" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.496400 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.508273 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6"] Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.548321 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l2x7\" (UniqueName: \"kubernetes.io/projected/c73858f3-e174-4a29-a454-1e7a222fd34e-kube-api-access-8l2x7\") pod \"mysqld-exporter-openstack-cell1-db-create-gv8q6\" (UID: \"c73858f3-e174-4a29-a454-1e7a222fd34e\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.549063 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c73858f3-e174-4a29-a454-1e7a222fd34e-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-gv8q6\" (UID: \"c73858f3-e174-4a29-a454-1e7a222fd34e\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.651451 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c73858f3-e174-4a29-a454-1e7a222fd34e-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-gv8q6\" (UID: \"c73858f3-e174-4a29-a454-1e7a222fd34e\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.651629 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l2x7\" (UniqueName: \"kubernetes.io/projected/c73858f3-e174-4a29-a454-1e7a222fd34e-kube-api-access-8l2x7\") pod \"mysqld-exporter-openstack-cell1-db-create-gv8q6\" (UID: \"c73858f3-e174-4a29-a454-1e7a222fd34e\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.652206 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c73858f3-e174-4a29-a454-1e7a222fd34e-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-gv8q6\" (UID: \"c73858f3-e174-4a29-a454-1e7a222fd34e\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.680394 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.687670 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l2x7\" (UniqueName: \"kubernetes.io/projected/c73858f3-e174-4a29-a454-1e7a222fd34e-kube-api-access-8l2x7\") pod \"mysqld-exporter-openstack-cell1-db-create-gv8q6\" (UID: \"c73858f3-e174-4a29-a454-1e7a222fd34e\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.768114 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-1c22-account-create-slgtx"] Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.770001 4915 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-1c22-account-create-slgtx" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.773824 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.780332 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-1c22-account-create-slgtx"] Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.805669 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zphpj"] Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.805971 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-zphpj" podUID="1a9ca780-e98b-499f-8da0-3070a56716e4" containerName="dnsmasq-dns" containerID="cri-o://bc0b4c1c9eb30630413ee1318499618d83fe9aa96db9a2468e03181d0e381cbf" gracePeriod=10 Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.820465 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.854264 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23b95ea3-815a-4b5a-a623-012022b46b78-operator-scripts\") pod \"mysqld-exporter-1c22-account-create-slgtx\" (UID: \"23b95ea3-815a-4b5a-a623-012022b46b78\") " pod="openstack/mysqld-exporter-1c22-account-create-slgtx" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.854342 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5xjn\" (UniqueName: \"kubernetes.io/projected/23b95ea3-815a-4b5a-a623-012022b46b78-kube-api-access-v5xjn\") pod \"mysqld-exporter-1c22-account-create-slgtx\" (UID: \"23b95ea3-815a-4b5a-a623-012022b46b78\") " pod="openstack/mysqld-exporter-1c22-account-create-slgtx" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.956643 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23b95ea3-815a-4b5a-a623-012022b46b78-operator-scripts\") pod \"mysqld-exporter-1c22-account-create-slgtx\" (UID: \"23b95ea3-815a-4b5a-a623-012022b46b78\") " pod="openstack/mysqld-exporter-1c22-account-create-slgtx" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.956934 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5xjn\" (UniqueName: \"kubernetes.io/projected/23b95ea3-815a-4b5a-a623-012022b46b78-kube-api-access-v5xjn\") pod \"mysqld-exporter-1c22-account-create-slgtx\" (UID: \"23b95ea3-815a-4b5a-a623-012022b46b78\") " pod="openstack/mysqld-exporter-1c22-account-create-slgtx" Nov 24 21:40:56 crc kubenswrapper[4915]: I1124 21:40:56.958414 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/23b95ea3-815a-4b5a-a623-012022b46b78-operator-scripts\") pod \"mysqld-exporter-1c22-account-create-slgtx\" (UID: \"23b95ea3-815a-4b5a-a623-012022b46b78\") " pod="openstack/mysqld-exporter-1c22-account-create-slgtx" Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.060615 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5xjn\" (UniqueName: \"kubernetes.io/projected/23b95ea3-815a-4b5a-a623-012022b46b78-kube-api-access-v5xjn\") pod \"mysqld-exporter-1c22-account-create-slgtx\" (UID: \"23b95ea3-815a-4b5a-a623-012022b46b78\") " pod="openstack/mysqld-exporter-1c22-account-create-slgtx" Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.098905 4915 generic.go:334] "Generic (PLEG): container finished" podID="1a9ca780-e98b-499f-8da0-3070a56716e4" containerID="bc0b4c1c9eb30630413ee1318499618d83fe9aa96db9a2468e03181d0e381cbf" exitCode=0 Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.098995 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-zphpj" event={"ID":"1a9ca780-e98b-499f-8da0-3070a56716e4","Type":"ContainerDied","Data":"bc0b4c1c9eb30630413ee1318499618d83fe9aa96db9a2468e03181d0e381cbf"} Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.101832 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-1c22-account-create-slgtx" Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.319259 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6"] Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.652039 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-1c22-account-create-slgtx"] Nov 24 21:40:57 crc kubenswrapper[4915]: W1124 21:40:57.661320 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23b95ea3_815a_4b5a_a623_012022b46b78.slice/crio-3560b97256b293128c93c50620afad2a506b8b04da494a844b3a4d7ba2b51625 WatchSource:0}: Error finding container 3560b97256b293128c93c50620afad2a506b8b04da494a844b3a4d7ba2b51625: Status 404 returned error can't find the container with id 3560b97256b293128c93c50620afad2a506b8b04da494a844b3a4d7ba2b51625 Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.679576 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.779203 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-ovsdbserver-sb\") pod \"1a9ca780-e98b-499f-8da0-3070a56716e4\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.779307 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-ovsdbserver-nb\") pod \"1a9ca780-e98b-499f-8da0-3070a56716e4\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.779378 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-dns-svc\") pod \"1a9ca780-e98b-499f-8da0-3070a56716e4\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.779459 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2t9z\" (UniqueName: \"kubernetes.io/projected/1a9ca780-e98b-499f-8da0-3070a56716e4-kube-api-access-f2t9z\") pod \"1a9ca780-e98b-499f-8da0-3070a56716e4\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.779609 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-config\") pod \"1a9ca780-e98b-499f-8da0-3070a56716e4\" (UID: \"1a9ca780-e98b-499f-8da0-3070a56716e4\") " Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.868096 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1a9ca780-e98b-499f-8da0-3070a56716e4-kube-api-access-f2t9z" (OuterVolumeSpecName: "kube-api-access-f2t9z") pod "1a9ca780-e98b-499f-8da0-3070a56716e4" (UID: "1a9ca780-e98b-499f-8da0-3070a56716e4"). InnerVolumeSpecName "kube-api-access-f2t9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.883420 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2t9z\" (UniqueName: \"kubernetes.io/projected/1a9ca780-e98b-499f-8da0-3070a56716e4-kube-api-access-f2t9z\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.932845 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1a9ca780-e98b-499f-8da0-3070a56716e4" (UID: "1a9ca780-e98b-499f-8da0-3070a56716e4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.942744 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-config" (OuterVolumeSpecName: "config") pod "1a9ca780-e98b-499f-8da0-3070a56716e4" (UID: "1a9ca780-e98b-499f-8da0-3070a56716e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.971009 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1a9ca780-e98b-499f-8da0-3070a56716e4" (UID: "1a9ca780-e98b-499f-8da0-3070a56716e4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.990337 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.990366 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:57 crc kubenswrapper[4915]: I1124 21:40:57.990377 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.016105 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1a9ca780-e98b-499f-8da0-3070a56716e4" (UID: "1a9ca780-e98b-499f-8da0-3070a56716e4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.079931 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-qgvss" Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.092546 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1a9ca780-e98b-499f-8da0-3070a56716e4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.110919 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-qgvss" Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.110951 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-qgvss" event={"ID":"3b5f98ba-ac45-4e86-b59f-34018eebddf8","Type":"ContainerDied","Data":"f44744969e7f6d2555f7eb3792e257e1d50b8b77613c42c78aedfc7ec875e19d"} Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.110999 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f44744969e7f6d2555f7eb3792e257e1d50b8b77613c42c78aedfc7ec875e19d" Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.114207 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1c22-account-create-slgtx" event={"ID":"23b95ea3-815a-4b5a-a623-012022b46b78","Type":"ContainerStarted","Data":"3560b97256b293128c93c50620afad2a506b8b04da494a844b3a4d7ba2b51625"} Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.125253 4915 generic.go:334] "Generic (PLEG): container finished" podID="c73858f3-e174-4a29-a454-1e7a222fd34e" containerID="1bcc7f9d9daea9311d26122f9ea7ec24798aca0eb4f750d10487450e766ad1c6" exitCode=0 Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.125331 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6" event={"ID":"c73858f3-e174-4a29-a454-1e7a222fd34e","Type":"ContainerDied","Data":"1bcc7f9d9daea9311d26122f9ea7ec24798aca0eb4f750d10487450e766ad1c6"} Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.125362 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6" event={"ID":"c73858f3-e174-4a29-a454-1e7a222fd34e","Type":"ContainerStarted","Data":"f6c07776e73c7592cd0cf6ec8663b39c03fdda3432eaed152b8e9b0af1400923"} Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.128607 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-8554648995-zphpj" event={"ID":"1a9ca780-e98b-499f-8da0-3070a56716e4","Type":"ContainerDied","Data":"623bebd01efe76881642f105879a58fa2550f34b451565f17af47ef9ffe0a2df"} Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.128672 4915 scope.go:117] "RemoveContainer" containerID="bc0b4c1c9eb30630413ee1318499618d83fe9aa96db9a2468e03181d0e381cbf" Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.128866 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-zphpj" Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.164287 4915 scope.go:117] "RemoveContainer" containerID="8b527b32eee616366915a3ed2ab8b3da4b19af69cf2c10f70bc9f15d2f1c5834" Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.193264 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b5f98ba-ac45-4e86-b59f-34018eebddf8-operator-scripts\") pod \"3b5f98ba-ac45-4e86-b59f-34018eebddf8\" (UID: \"3b5f98ba-ac45-4e86-b59f-34018eebddf8\") " Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.193497 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-479xb\" (UniqueName: \"kubernetes.io/projected/3b5f98ba-ac45-4e86-b59f-34018eebddf8-kube-api-access-479xb\") pod \"3b5f98ba-ac45-4e86-b59f-34018eebddf8\" (UID: \"3b5f98ba-ac45-4e86-b59f-34018eebddf8\") " Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.194509 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b5f98ba-ac45-4e86-b59f-34018eebddf8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b5f98ba-ac45-4e86-b59f-34018eebddf8" (UID: "3b5f98ba-ac45-4e86-b59f-34018eebddf8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.196595 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zphpj"] Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.203859 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zphpj"] Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.268008 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b5f98ba-ac45-4e86-b59f-34018eebddf8-kube-api-access-479xb" (OuterVolumeSpecName: "kube-api-access-479xb") pod "3b5f98ba-ac45-4e86-b59f-34018eebddf8" (UID: "3b5f98ba-ac45-4e86-b59f-34018eebddf8"). InnerVolumeSpecName "kube-api-access-479xb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.296082 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b5f98ba-ac45-4e86-b59f-34018eebddf8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.296110 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-479xb\" (UniqueName: \"kubernetes.io/projected/3b5f98ba-ac45-4e86-b59f-34018eebddf8-kube-api-access-479xb\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.397261 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:40:58 crc kubenswrapper[4915]: E1124 21:40:58.397448 4915 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 24 21:40:58 crc kubenswrapper[4915]: E1124 21:40:58.397476 4915 
projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 24 21:40:58 crc kubenswrapper[4915]: E1124 21:40:58.397527 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift podName:ea1dafcf-631a-4ae6-8aad-d716b977402d nodeName:}" failed. No retries permitted until 2025-11-24 21:41:14.397511188 +0000 UTC m=+1292.713763361 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift") pod "swift-storage-0" (UID: "ea1dafcf-631a-4ae6-8aad-d716b977402d") : configmap "swift-ring-files" not found Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.453448 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a9ca780-e98b-499f-8da0-3070a56716e4" path="/var/lib/kubelet/pods/1a9ca780-e98b-499f-8da0-3070a56716e4/volumes" Nov 24 21:40:58 crc kubenswrapper[4915]: E1124 21:40:58.589161 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23b95ea3_815a_4b5a_a623_012022b46b78.slice/crio-3dc6890cf19b43df6cc27b302bd755aea07fb56994ee11b98a31fbad56367aaa.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b5f98ba_ac45_4e86_b59f_34018eebddf8.slice/crio-f44744969e7f6d2555f7eb3792e257e1d50b8b77613c42c78aedfc7ec875e19d\": RecentStats: unable to find data in memory cache]" Nov 24 21:40:58 crc kubenswrapper[4915]: I1124 21:40:58.659009 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6f65944744-l2cf5" podUID="51dde57d-c05e-4785-b9f6-33f710b32806" containerName="console" 
containerID="cri-o://dcf0facf4614ea7610099141ed459282e1d9189cb1453b5d69ea86df5a4448dc" gracePeriod=15 Nov 24 21:40:59 crc kubenswrapper[4915]: I1124 21:40:59.139687 4915 generic.go:334] "Generic (PLEG): container finished" podID="23b95ea3-815a-4b5a-a623-012022b46b78" containerID="3dc6890cf19b43df6cc27b302bd755aea07fb56994ee11b98a31fbad56367aaa" exitCode=0 Nov 24 21:40:59 crc kubenswrapper[4915]: I1124 21:40:59.139808 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1c22-account-create-slgtx" event={"ID":"23b95ea3-815a-4b5a-a623-012022b46b78","Type":"ContainerDied","Data":"3dc6890cf19b43df6cc27b302bd755aea07fb56994ee11b98a31fbad56367aaa"} Nov 24 21:40:59 crc kubenswrapper[4915]: I1124 21:40:59.145956 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f65944744-l2cf5_51dde57d-c05e-4785-b9f6-33f710b32806/console/0.log" Nov 24 21:40:59 crc kubenswrapper[4915]: I1124 21:40:59.146274 4915 generic.go:334] "Generic (PLEG): container finished" podID="51dde57d-c05e-4785-b9f6-33f710b32806" containerID="dcf0facf4614ea7610099141ed459282e1d9189cb1453b5d69ea86df5a4448dc" exitCode=2 Nov 24 21:40:59 crc kubenswrapper[4915]: I1124 21:40:59.146327 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f65944744-l2cf5" event={"ID":"51dde57d-c05e-4785-b9f6-33f710b32806","Type":"ContainerDied","Data":"dcf0facf4614ea7610099141ed459282e1d9189cb1453b5d69ea86df5a4448dc"} Nov 24 21:40:59 crc kubenswrapper[4915]: I1124 21:40:59.152166 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"59f54e36-0033-4e91-be8b-7d447d666d04","Type":"ContainerStarted","Data":"29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292"} Nov 24 21:40:59 crc kubenswrapper[4915]: I1124 21:40:59.564989 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6" Nov 24 21:40:59 crc kubenswrapper[4915]: I1124 21:40:59.625791 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c73858f3-e174-4a29-a454-1e7a222fd34e-operator-scripts\") pod \"c73858f3-e174-4a29-a454-1e7a222fd34e\" (UID: \"c73858f3-e174-4a29-a454-1e7a222fd34e\") " Nov 24 21:40:59 crc kubenswrapper[4915]: I1124 21:40:59.626107 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l2x7\" (UniqueName: \"kubernetes.io/projected/c73858f3-e174-4a29-a454-1e7a222fd34e-kube-api-access-8l2x7\") pod \"c73858f3-e174-4a29-a454-1e7a222fd34e\" (UID: \"c73858f3-e174-4a29-a454-1e7a222fd34e\") " Nov 24 21:40:59 crc kubenswrapper[4915]: I1124 21:40:59.626501 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c73858f3-e174-4a29-a454-1e7a222fd34e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c73858f3-e174-4a29-a454-1e7a222fd34e" (UID: "c73858f3-e174-4a29-a454-1e7a222fd34e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:40:59 crc kubenswrapper[4915]: I1124 21:40:59.626898 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c73858f3-e174-4a29-a454-1e7a222fd34e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:40:59 crc kubenswrapper[4915]: I1124 21:40:59.632239 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c73858f3-e174-4a29-a454-1e7a222fd34e-kube-api-access-8l2x7" (OuterVolumeSpecName: "kube-api-access-8l2x7") pod "c73858f3-e174-4a29-a454-1e7a222fd34e" (UID: "c73858f3-e174-4a29-a454-1e7a222fd34e"). InnerVolumeSpecName "kube-api-access-8l2x7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:40:59 crc kubenswrapper[4915]: I1124 21:40:59.728684 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l2x7\" (UniqueName: \"kubernetes.io/projected/c73858f3-e174-4a29-a454-1e7a222fd34e-kube-api-access-8l2x7\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.176848 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.177505 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6" event={"ID":"c73858f3-e174-4a29-a454-1e7a222fd34e","Type":"ContainerDied","Data":"f6c07776e73c7592cd0cf6ec8663b39c03fdda3432eaed152b8e9b0af1400923"} Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.177558 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6c07776e73c7592cd0cf6ec8663b39c03fdda3432eaed152b8e9b0af1400923" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.545045 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wj47k" podUID="67dd4a19-a0b7-4c7b-8289-40b7fc2476dc" containerName="ovn-controller" probeResult="failure" output=< Nov 24 21:41:00 crc kubenswrapper[4915]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 21:41:00 crc kubenswrapper[4915]: > Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.620003 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-1c22-account-create-slgtx" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.753368 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23b95ea3-815a-4b5a-a623-012022b46b78-operator-scripts\") pod \"23b95ea3-815a-4b5a-a623-012022b46b78\" (UID: \"23b95ea3-815a-4b5a-a623-012022b46b78\") " Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.753479 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5xjn\" (UniqueName: \"kubernetes.io/projected/23b95ea3-815a-4b5a-a623-012022b46b78-kube-api-access-v5xjn\") pod \"23b95ea3-815a-4b5a-a623-012022b46b78\" (UID: \"23b95ea3-815a-4b5a-a623-012022b46b78\") " Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.754422 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b95ea3-815a-4b5a-a623-012022b46b78-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23b95ea3-815a-4b5a-a623-012022b46b78" (UID: "23b95ea3-815a-4b5a-a623-012022b46b78"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.758631 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b95ea3-815a-4b5a-a623-012022b46b78-kube-api-access-v5xjn" (OuterVolumeSpecName: "kube-api-access-v5xjn") pod "23b95ea3-815a-4b5a-a623-012022b46b78" (UID: "23b95ea3-815a-4b5a-a623-012022b46b78"). InnerVolumeSpecName "kube-api-access-v5xjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.818157 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f65944744-l2cf5_51dde57d-c05e-4785-b9f6-33f710b32806/console/0.log" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.818254 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.856579 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23b95ea3-815a-4b5a-a623-012022b46b78-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.856643 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5xjn\" (UniqueName: \"kubernetes.io/projected/23b95ea3-815a-4b5a-a623-012022b46b78-kube-api-access-v5xjn\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.957305 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-console-config\") pod \"51dde57d-c05e-4785-b9f6-33f710b32806\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.957404 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51dde57d-c05e-4785-b9f6-33f710b32806-console-oauth-config\") pod \"51dde57d-c05e-4785-b9f6-33f710b32806\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.957445 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbnzq\" (UniqueName: 
\"kubernetes.io/projected/51dde57d-c05e-4785-b9f6-33f710b32806-kube-api-access-nbnzq\") pod \"51dde57d-c05e-4785-b9f6-33f710b32806\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.957475 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51dde57d-c05e-4785-b9f6-33f710b32806-console-serving-cert\") pod \"51dde57d-c05e-4785-b9f6-33f710b32806\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.957590 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-oauth-serving-cert\") pod \"51dde57d-c05e-4785-b9f6-33f710b32806\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.957684 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-service-ca\") pod \"51dde57d-c05e-4785-b9f6-33f710b32806\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.957887 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-trusted-ca-bundle\") pod \"51dde57d-c05e-4785-b9f6-33f710b32806\" (UID: \"51dde57d-c05e-4785-b9f6-33f710b32806\") " Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.958335 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-console-config" (OuterVolumeSpecName: "console-config") pod "51dde57d-c05e-4785-b9f6-33f710b32806" (UID: "51dde57d-c05e-4785-b9f6-33f710b32806"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.958536 4915 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-console-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.958600 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "51dde57d-c05e-4785-b9f6-33f710b32806" (UID: "51dde57d-c05e-4785-b9f6-33f710b32806"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.958967 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "51dde57d-c05e-4785-b9f6-33f710b32806" (UID: "51dde57d-c05e-4785-b9f6-33f710b32806"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.959120 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-service-ca" (OuterVolumeSpecName: "service-ca") pod "51dde57d-c05e-4785-b9f6-33f710b32806" (UID: "51dde57d-c05e-4785-b9f6-33f710b32806"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.962748 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51dde57d-c05e-4785-b9f6-33f710b32806-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "51dde57d-c05e-4785-b9f6-33f710b32806" (UID: "51dde57d-c05e-4785-b9f6-33f710b32806"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.963084 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51dde57d-c05e-4785-b9f6-33f710b32806-kube-api-access-nbnzq" (OuterVolumeSpecName: "kube-api-access-nbnzq") pod "51dde57d-c05e-4785-b9f6-33f710b32806" (UID: "51dde57d-c05e-4785-b9f6-33f710b32806"). InnerVolumeSpecName "kube-api-access-nbnzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:00 crc kubenswrapper[4915]: I1124 21:41:00.963094 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51dde57d-c05e-4785-b9f6-33f710b32806-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "51dde57d-c05e-4785-b9f6-33f710b32806" (UID: "51dde57d-c05e-4785-b9f6-33f710b32806"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.060398 4915 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.060457 4915 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/51dde57d-c05e-4785-b9f6-33f710b32806-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.060481 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbnzq\" (UniqueName: \"kubernetes.io/projected/51dde57d-c05e-4785-b9f6-33f710b32806-kube-api-access-nbnzq\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.060500 4915 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/51dde57d-c05e-4785-b9f6-33f710b32806-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.060518 4915 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.060538 4915 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/51dde57d-c05e-4785-b9f6-33f710b32806-service-ca\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.191702 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-1c22-account-create-slgtx" 
event={"ID":"23b95ea3-815a-4b5a-a623-012022b46b78","Type":"ContainerDied","Data":"3560b97256b293128c93c50620afad2a506b8b04da494a844b3a4d7ba2b51625"} Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.191799 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3560b97256b293128c93c50620afad2a506b8b04da494a844b3a4d7ba2b51625" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.191735 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-1c22-account-create-slgtx" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.194930 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f65944744-l2cf5_51dde57d-c05e-4785-b9f6-33f710b32806/console/0.log" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.195046 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f65944744-l2cf5" event={"ID":"51dde57d-c05e-4785-b9f6-33f710b32806","Type":"ContainerDied","Data":"295aadead969e766de86bbe3bf5c06c904e4dd275fc4e00765a2e18b10d0d992"} Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.195127 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6f65944744-l2cf5" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.195137 4915 scope.go:117] "RemoveContainer" containerID="dcf0facf4614ea7610099141ed459282e1d9189cb1453b5d69ea86df5a4448dc" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.237222 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f65944744-l2cf5"] Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.246391 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6f65944744-l2cf5"] Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.858463 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:41:01 crc kubenswrapper[4915]: E1124 21:41:01.859236 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51dde57d-c05e-4785-b9f6-33f710b32806" containerName="console" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.859259 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="51dde57d-c05e-4785-b9f6-33f710b32806" containerName="console" Nov 24 21:41:01 crc kubenswrapper[4915]: E1124 21:41:01.859273 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9ca780-e98b-499f-8da0-3070a56716e4" containerName="dnsmasq-dns" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.859281 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9ca780-e98b-499f-8da0-3070a56716e4" containerName="dnsmasq-dns" Nov 24 21:41:01 crc kubenswrapper[4915]: E1124 21:41:01.859289 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73858f3-e174-4a29-a454-1e7a222fd34e" containerName="mariadb-database-create" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.859294 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73858f3-e174-4a29-a454-1e7a222fd34e" containerName="mariadb-database-create" Nov 24 21:41:01 crc kubenswrapper[4915]: E1124 21:41:01.859309 4915 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b95ea3-815a-4b5a-a623-012022b46b78" containerName="mariadb-account-create" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.859314 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b95ea3-815a-4b5a-a623-012022b46b78" containerName="mariadb-account-create" Nov 24 21:41:01 crc kubenswrapper[4915]: E1124 21:41:01.859335 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b5f98ba-ac45-4e86-b59f-34018eebddf8" containerName="mariadb-database-create" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.859341 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b5f98ba-ac45-4e86-b59f-34018eebddf8" containerName="mariadb-database-create" Nov 24 21:41:01 crc kubenswrapper[4915]: E1124 21:41:01.859351 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a9ca780-e98b-499f-8da0-3070a56716e4" containerName="init" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.859357 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a9ca780-e98b-499f-8da0-3070a56716e4" containerName="init" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.859545 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="51dde57d-c05e-4785-b9f6-33f710b32806" containerName="console" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.859561 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b5f98ba-ac45-4e86-b59f-34018eebddf8" containerName="mariadb-database-create" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.859579 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a9ca780-e98b-499f-8da0-3070a56716e4" containerName="dnsmasq-dns" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.859589 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b95ea3-815a-4b5a-a623-012022b46b78" containerName="mariadb-account-create" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 
21:41:01.859597 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73858f3-e174-4a29-a454-1e7a222fd34e" containerName="mariadb-database-create" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.860276 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.864768 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.871513 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.986382 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vd62\" (UniqueName: \"kubernetes.io/projected/43a390f0-1b26-42d4-bee7-df0e425cc7bf-kube-api-access-4vd62\") pod \"mysqld-exporter-0\" (UID: \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\") " pod="openstack/mysqld-exporter-0" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.986796 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a390f0-1b26-42d4-bee7-df0e425cc7bf-config-data\") pod \"mysqld-exporter-0\" (UID: \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\") " pod="openstack/mysqld-exporter-0" Nov 24 21:41:01 crc kubenswrapper[4915]: I1124 21:41:01.987003 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a390f0-1b26-42d4-bee7-df0e425cc7bf-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\") " pod="openstack/mysqld-exporter-0" Nov 24 21:41:02 crc kubenswrapper[4915]: I1124 21:41:02.089610 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/43a390f0-1b26-42d4-bee7-df0e425cc7bf-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\") " pod="openstack/mysqld-exporter-0" Nov 24 21:41:02 crc kubenswrapper[4915]: I1124 21:41:02.089903 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vd62\" (UniqueName: \"kubernetes.io/projected/43a390f0-1b26-42d4-bee7-df0e425cc7bf-kube-api-access-4vd62\") pod \"mysqld-exporter-0\" (UID: \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\") " pod="openstack/mysqld-exporter-0" Nov 24 21:41:02 crc kubenswrapper[4915]: I1124 21:41:02.090068 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a390f0-1b26-42d4-bee7-df0e425cc7bf-config-data\") pod \"mysqld-exporter-0\" (UID: \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\") " pod="openstack/mysqld-exporter-0" Nov 24 21:41:02 crc kubenswrapper[4915]: I1124 21:41:02.095382 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a390f0-1b26-42d4-bee7-df0e425cc7bf-config-data\") pod \"mysqld-exporter-0\" (UID: \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\") " pod="openstack/mysqld-exporter-0" Nov 24 21:41:02 crc kubenswrapper[4915]: I1124 21:41:02.095474 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a390f0-1b26-42d4-bee7-df0e425cc7bf-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\") " pod="openstack/mysqld-exporter-0" Nov 24 21:41:02 crc kubenswrapper[4915]: I1124 21:41:02.108823 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vd62\" (UniqueName: \"kubernetes.io/projected/43a390f0-1b26-42d4-bee7-df0e425cc7bf-kube-api-access-4vd62\") pod \"mysqld-exporter-0\" (UID: \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\") " 
pod="openstack/mysqld-exporter-0" Nov 24 21:41:02 crc kubenswrapper[4915]: I1124 21:41:02.187534 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 21:41:02 crc kubenswrapper[4915]: I1124 21:41:02.442597 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51dde57d-c05e-4785-b9f6-33f710b32806" path="/var/lib/kubelet/pods/51dde57d-c05e-4785-b9f6-33f710b32806/volumes" Nov 24 21:41:02 crc kubenswrapper[4915]: I1124 21:41:02.603227 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-zphpj" podUID="1a9ca780-e98b-499f-8da0-3070a56716e4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.146:5353: i/o timeout" Nov 24 21:41:03 crc kubenswrapper[4915]: I1124 21:41:03.238421 4915 generic.go:334] "Generic (PLEG): container finished" podID="d354b8f9-3820-4897-8a8a-021d4a98668a" containerID="5a5680eda45ab1e1c29acc4a67d4032bd075799238243710e4a18a78ebaa9789" exitCode=0 Nov 24 21:41:03 crc kubenswrapper[4915]: I1124 21:41:03.238524 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lfsfp" event={"ID":"d354b8f9-3820-4897-8a8a-021d4a98668a","Type":"ContainerDied","Data":"5a5680eda45ab1e1c29acc4a67d4032bd075799238243710e4a18a78ebaa9789"} Nov 24 21:41:05 crc kubenswrapper[4915]: I1124 21:41:05.258980 4915 generic.go:334] "Generic (PLEG): container finished" podID="a45944d3-396b-4683-b9b5-8e42e9331043" containerID="75d4058db27c5fe795eb5b76e0577c3f27bc129142a656460e05201b3b1c3c20" exitCode=0 Nov 24 21:41:05 crc kubenswrapper[4915]: I1124 21:41:05.259088 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a45944d3-396b-4683-b9b5-8e42e9331043","Type":"ContainerDied","Data":"75d4058db27c5fe795eb5b76e0577c3f27bc129142a656460e05201b3b1c3c20"} Nov 24 21:41:05 crc kubenswrapper[4915]: I1124 21:41:05.261140 4915 generic.go:334] "Generic (PLEG): 
container finished" podID="8c50db1c-ac88-4299-ab96-8b750308610f" containerID="534c9537314191cda6d87d81dd98fa53f08531ad5782dccb0402569ba537e1b7" exitCode=0 Nov 24 21:41:05 crc kubenswrapper[4915]: I1124 21:41:05.261176 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c50db1c-ac88-4299-ab96-8b750308610f","Type":"ContainerDied","Data":"534c9537314191cda6d87d81dd98fa53f08531ad5782dccb0402569ba537e1b7"} Nov 24 21:41:05 crc kubenswrapper[4915]: I1124 21:41:05.544279 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wj47k" podUID="67dd4a19-a0b7-4c7b-8289-40b7fc2476dc" containerName="ovn-controller" probeResult="failure" output=< Nov 24 21:41:05 crc kubenswrapper[4915]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 21:41:05 crc kubenswrapper[4915]: > Nov 24 21:41:05 crc kubenswrapper[4915]: I1124 21:41:05.659965 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:41:05 crc kubenswrapper[4915]: I1124 21:41:05.662414 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-mszzs" Nov 24 21:41:05 crc kubenswrapper[4915]: I1124 21:41:05.896630 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wj47k-config-4kvx4"] Nov 24 21:41:05 crc kubenswrapper[4915]: I1124 21:41:05.898588 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:05 crc kubenswrapper[4915]: I1124 21:41:05.900884 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 21:41:05 crc kubenswrapper[4915]: I1124 21:41:05.909509 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wj47k-config-4kvx4"] Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.001614 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-run-ovn\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.001857 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-scripts\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.001934 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-log-ovn\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.002035 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-run\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") 
" pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.002156 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-additional-scripts\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.002299 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr8sk\" (UniqueName: \"kubernetes.io/projected/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-kube-api-access-cr8sk\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.104025 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-run\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.104113 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-additional-scripts\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.104184 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr8sk\" (UniqueName: \"kubernetes.io/projected/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-kube-api-access-cr8sk\") pod 
\"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.104270 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-run-ovn\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.104349 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-scripts\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.104382 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-log-ovn\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.104720 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-log-ovn\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.104807 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-run-ovn\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: 
\"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.104978 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-run\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.105832 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-additional-scripts\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.107268 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-scripts\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.124111 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr8sk\" (UniqueName: \"kubernetes.io/projected/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-kube-api-access-cr8sk\") pod \"ovn-controller-wj47k-config-4kvx4\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:06 crc kubenswrapper[4915]: I1124 21:41:06.227713 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.047118 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.185559 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-dispersionconf\") pod \"d354b8f9-3820-4897-8a8a-021d4a98668a\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.185930 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d354b8f9-3820-4897-8a8a-021d4a98668a-scripts\") pod \"d354b8f9-3820-4897-8a8a-021d4a98668a\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.185994 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d354b8f9-3820-4897-8a8a-021d4a98668a-etc-swift\") pod \"d354b8f9-3820-4897-8a8a-021d4a98668a\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.186211 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtjrt\" (UniqueName: \"kubernetes.io/projected/d354b8f9-3820-4897-8a8a-021d4a98668a-kube-api-access-jtjrt\") pod \"d354b8f9-3820-4897-8a8a-021d4a98668a\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.186243 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-combined-ca-bundle\") pod \"d354b8f9-3820-4897-8a8a-021d4a98668a\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.186336 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" 
(UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-swiftconf\") pod \"d354b8f9-3820-4897-8a8a-021d4a98668a\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.186370 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d354b8f9-3820-4897-8a8a-021d4a98668a-ring-data-devices\") pod \"d354b8f9-3820-4897-8a8a-021d4a98668a\" (UID: \"d354b8f9-3820-4897-8a8a-021d4a98668a\") " Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.187440 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d354b8f9-3820-4897-8a8a-021d4a98668a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "d354b8f9-3820-4897-8a8a-021d4a98668a" (UID: "d354b8f9-3820-4897-8a8a-021d4a98668a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.187639 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d354b8f9-3820-4897-8a8a-021d4a98668a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "d354b8f9-3820-4897-8a8a-021d4a98668a" (UID: "d354b8f9-3820-4897-8a8a-021d4a98668a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.209195 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d354b8f9-3820-4897-8a8a-021d4a98668a-kube-api-access-jtjrt" (OuterVolumeSpecName: "kube-api-access-jtjrt") pod "d354b8f9-3820-4897-8a8a-021d4a98668a" (UID: "d354b8f9-3820-4897-8a8a-021d4a98668a"). InnerVolumeSpecName "kube-api-access-jtjrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.212535 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "d354b8f9-3820-4897-8a8a-021d4a98668a" (UID: "d354b8f9-3820-4897-8a8a-021d4a98668a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.260889 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "d354b8f9-3820-4897-8a8a-021d4a98668a" (UID: "d354b8f9-3820-4897-8a8a-021d4a98668a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.290177 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtjrt\" (UniqueName: \"kubernetes.io/projected/d354b8f9-3820-4897-8a8a-021d4a98668a-kube-api-access-jtjrt\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.290229 4915 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.290245 4915 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/d354b8f9-3820-4897-8a8a-021d4a98668a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.290258 4915 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 24 
21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.290269 4915 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/d354b8f9-3820-4897-8a8a-021d4a98668a-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.290271 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d354b8f9-3820-4897-8a8a-021d4a98668a-scripts" (OuterVolumeSpecName: "scripts") pod "d354b8f9-3820-4897-8a8a-021d4a98668a" (UID: "d354b8f9-3820-4897-8a8a-021d4a98668a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.300881 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d354b8f9-3820-4897-8a8a-021d4a98668a" (UID: "d354b8f9-3820-4897-8a8a-021d4a98668a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.355061 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a45944d3-396b-4683-b9b5-8e42e9331043","Type":"ContainerStarted","Data":"fb78ca6bc2439f8a59cc72a28612166857898661871c918b5ae004b3bc4f379c"} Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.356096 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.359513 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c50db1c-ac88-4299-ab96-8b750308610f","Type":"ContainerStarted","Data":"9d7ed096d74c8eb4cd8db7f36bbb38472431a51770566de7e8a3e42a55417774"} Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.360104 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.365496 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lfsfp" event={"ID":"d354b8f9-3820-4897-8a8a-021d4a98668a","Type":"ContainerDied","Data":"3d603db72715e33a52eb4bc4a0636e91878d2ef498d0599ebd3dfb84017add43"} Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.365538 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d603db72715e33a52eb4bc4a0636e91878d2ef498d0599ebd3dfb84017add43" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.365590 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lfsfp" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.373184 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"59f54e36-0033-4e91-be8b-7d447d666d04","Type":"ContainerStarted","Data":"41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0"} Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.395055 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d354b8f9-3820-4897-8a8a-021d4a98668a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.395104 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d354b8f9-3820-4897-8a8a-021d4a98668a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.398559 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.045739758 podStartE2EDuration="1m16.398541362s" podCreationTimestamp="2025-11-24 21:39:54 +0000 UTC" firstStartedPulling="2025-11-24 21:39:56.638441566 +0000 UTC m=+1214.954693739" lastFinishedPulling="2025-11-24 21:40:29.99124315 +0000 UTC m=+1248.307495343" observedRunningTime="2025-11-24 21:41:10.387150538 +0000 UTC m=+1288.703402711" watchObservedRunningTime="2025-11-24 21:41:10.398541362 +0000 UTC m=+1288.714793535" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.440560 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=43.236716243 podStartE2EDuration="1m16.440543293s" podCreationTimestamp="2025-11-24 21:39:54 +0000 UTC" firstStartedPulling="2025-11-24 21:39:56.773356905 +0000 UTC m=+1215.089609078" lastFinishedPulling="2025-11-24 21:40:29.977183955 +0000 UTC m=+1248.293436128" 
observedRunningTime="2025-11-24 21:41:10.420739054 +0000 UTC m=+1288.736991227" watchObservedRunningTime="2025-11-24 21:41:10.440543293 +0000 UTC m=+1288.756795466" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.454458 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wj47k-config-4kvx4"] Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.455997 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:41:10 crc kubenswrapper[4915]: W1124 21:41:10.462934 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43a390f0_1b26_42d4_bee7_df0e425cc7bf.slice/crio-0095eb0da3aee9919f780e69c49a9f51dd592260dd05790d0feb345f5727db6c WatchSource:0}: Error finding container 0095eb0da3aee9919f780e69c49a9f51dd592260dd05790d0feb345f5727db6c: Status 404 returned error can't find the container with id 0095eb0da3aee9919f780e69c49a9f51dd592260dd05790d0feb345f5727db6c Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.464915 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.472603 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=22.030386377 podStartE2EDuration="1m9.472582167s" podCreationTimestamp="2025-11-24 21:40:01 +0000 UTC" firstStartedPulling="2025-11-24 21:40:22.465939522 +0000 UTC m=+1240.782191695" lastFinishedPulling="2025-11-24 21:41:09.908135292 +0000 UTC m=+1288.224387485" observedRunningTime="2025-11-24 21:41:10.457508755 +0000 UTC m=+1288.773760928" watchObservedRunningTime="2025-11-24 21:41:10.472582167 +0000 UTC m=+1288.788834350" Nov 24 21:41:10 crc kubenswrapper[4915]: I1124 21:41:10.519887 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-wj47k" 
podUID="67dd4a19-a0b7-4c7b-8289-40b7fc2476dc" containerName="ovn-controller" probeResult="failure" output=< Nov 24 21:41:10 crc kubenswrapper[4915]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 24 21:41:10 crc kubenswrapper[4915]: > Nov 24 21:41:11 crc kubenswrapper[4915]: I1124 21:41:11.390400 4915 generic.go:334] "Generic (PLEG): container finished" podID="0f2de205-100d-4ef3-ae6a-6a4119b95a5f" containerID="a91d57e8e1a5a27871050dae5a121ab8d7418eb4ef962432bbcf37df14437731" exitCode=0 Nov 24 21:41:11 crc kubenswrapper[4915]: I1124 21:41:11.390501 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wj47k-config-4kvx4" event={"ID":"0f2de205-100d-4ef3-ae6a-6a4119b95a5f","Type":"ContainerDied","Data":"a91d57e8e1a5a27871050dae5a121ab8d7418eb4ef962432bbcf37df14437731"} Nov 24 21:41:11 crc kubenswrapper[4915]: I1124 21:41:11.390834 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wj47k-config-4kvx4" event={"ID":"0f2de205-100d-4ef3-ae6a-6a4119b95a5f","Type":"ContainerStarted","Data":"3f9e8837b1e2e822a0a448b6a5ec99faf57c2f569f79dbe2b20df0b845c06b8e"} Nov 24 21:41:11 crc kubenswrapper[4915]: I1124 21:41:11.392460 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"43a390f0-1b26-42d4-bee7-df0e425cc7bf","Type":"ContainerStarted","Data":"0095eb0da3aee9919f780e69c49a9f51dd592260dd05790d0feb345f5727db6c"} Nov 24 21:41:11 crc kubenswrapper[4915]: I1124 21:41:11.395854 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wtkdf" event={"ID":"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46","Type":"ContainerStarted","Data":"c017670a0bfcff348bb4123c9a8d89bbd8e82682209fc96e8d1d194264d0bad7"} Nov 24 21:41:11 crc kubenswrapper[4915]: I1124 21:41:11.436734 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-wtkdf" podStartSLOduration=7.557651324 
podStartE2EDuration="22.436714222s" podCreationTimestamp="2025-11-24 21:40:49 +0000 UTC" firstStartedPulling="2025-11-24 21:40:55.044452534 +0000 UTC m=+1273.360704707" lastFinishedPulling="2025-11-24 21:41:09.923515432 +0000 UTC m=+1288.239767605" observedRunningTime="2025-11-24 21:41:11.42015432 +0000 UTC m=+1289.736406483" watchObservedRunningTime="2025-11-24 21:41:11.436714222 +0000 UTC m=+1289.752966415" Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.407373 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"43a390f0-1b26-42d4-bee7-df0e425cc7bf","Type":"ContainerStarted","Data":"7a8c5db09fd8c6c74407980359318c5a0bf7c0fbc2e8ef895ee300dcbb2d6260"} Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.441445 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=10.110592514 podStartE2EDuration="11.44142172s" podCreationTimestamp="2025-11-24 21:41:01 +0000 UTC" firstStartedPulling="2025-11-24 21:41:10.464626804 +0000 UTC m=+1288.780878977" lastFinishedPulling="2025-11-24 21:41:11.795456 +0000 UTC m=+1290.111708183" observedRunningTime="2025-11-24 21:41:12.435101482 +0000 UTC m=+1290.751353705" watchObservedRunningTime="2025-11-24 21:41:12.44142172 +0000 UTC m=+1290.757673923" Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.841328 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.884873 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.953129 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-run-ovn\") pod \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.953179 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-log-ovn\") pod \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.953227 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr8sk\" (UniqueName: \"kubernetes.io/projected/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-kube-api-access-cr8sk\") pod \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.953314 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0f2de205-100d-4ef3-ae6a-6a4119b95a5f" (UID: "0f2de205-100d-4ef3-ae6a-6a4119b95a5f"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.953352 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-run\") pod \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.953346 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0f2de205-100d-4ef3-ae6a-6a4119b95a5f" (UID: "0f2de205-100d-4ef3-ae6a-6a4119b95a5f"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.953417 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-additional-scripts\") pod \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.953449 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-scripts\") pod \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\" (UID: \"0f2de205-100d-4ef3-ae6a-6a4119b95a5f\") " Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.953491 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-run" (OuterVolumeSpecName: "var-run") pod "0f2de205-100d-4ef3-ae6a-6a4119b95a5f" (UID: "0f2de205-100d-4ef3-ae6a-6a4119b95a5f"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.954085 4915 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.954107 4915 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.954116 4915 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.954302 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0f2de205-100d-4ef3-ae6a-6a4119b95a5f" (UID: "0f2de205-100d-4ef3-ae6a-6a4119b95a5f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.954446 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-scripts" (OuterVolumeSpecName: "scripts") pod "0f2de205-100d-4ef3-ae6a-6a4119b95a5f" (UID: "0f2de205-100d-4ef3-ae6a-6a4119b95a5f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:12 crc kubenswrapper[4915]: I1124 21:41:12.960920 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-kube-api-access-cr8sk" (OuterVolumeSpecName: "kube-api-access-cr8sk") pod "0f2de205-100d-4ef3-ae6a-6a4119b95a5f" (UID: "0f2de205-100d-4ef3-ae6a-6a4119b95a5f"). InnerVolumeSpecName "kube-api-access-cr8sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:13 crc kubenswrapper[4915]: I1124 21:41:13.055952 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr8sk\" (UniqueName: \"kubernetes.io/projected/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-kube-api-access-cr8sk\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:13 crc kubenswrapper[4915]: I1124 21:41:13.055977 4915 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:13 crc kubenswrapper[4915]: I1124 21:41:13.055987 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f2de205-100d-4ef3-ae6a-6a4119b95a5f-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:13 crc kubenswrapper[4915]: I1124 21:41:13.417256 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wj47k-config-4kvx4" event={"ID":"0f2de205-100d-4ef3-ae6a-6a4119b95a5f","Type":"ContainerDied","Data":"3f9e8837b1e2e822a0a448b6a5ec99faf57c2f569f79dbe2b20df0b845c06b8e"} Nov 24 21:41:13 crc kubenswrapper[4915]: I1124 21:41:13.417298 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f9e8837b1e2e822a0a448b6a5ec99faf57c2f569f79dbe2b20df0b845c06b8e" Nov 24 21:41:13 crc kubenswrapper[4915]: I1124 21:41:13.417272 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wj47k-config-4kvx4" Nov 24 21:41:13 crc kubenswrapper[4915]: I1124 21:41:13.956551 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wj47k-config-4kvx4"] Nov 24 21:41:13 crc kubenswrapper[4915]: I1124 21:41:13.965741 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wj47k-config-4kvx4"] Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.055725 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-wj47k-config-gf29n"] Nov 24 21:41:14 crc kubenswrapper[4915]: E1124 21:41:14.056200 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d354b8f9-3820-4897-8a8a-021d4a98668a" containerName="swift-ring-rebalance" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.056218 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d354b8f9-3820-4897-8a8a-021d4a98668a" containerName="swift-ring-rebalance" Nov 24 21:41:14 crc kubenswrapper[4915]: E1124 21:41:14.056230 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2de205-100d-4ef3-ae6a-6a4119b95a5f" containerName="ovn-config" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.056237 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2de205-100d-4ef3-ae6a-6a4119b95a5f" containerName="ovn-config" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.056458 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f2de205-100d-4ef3-ae6a-6a4119b95a5f" containerName="ovn-config" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.056482 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d354b8f9-3820-4897-8a8a-021d4a98668a" containerName="swift-ring-rebalance" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.057167 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.059740 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.070058 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wj47k-config-gf29n"] Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.178236 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbf4f\" (UniqueName: \"kubernetes.io/projected/16493970-0b04-4f00-9383-ace92f29acab-kube-api-access-bbf4f\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.178320 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/16493970-0b04-4f00-9383-ace92f29acab-additional-scripts\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.178439 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-log-ovn\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.178489 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-run-ovn\") pod 
\"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.178735 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16493970-0b04-4f00-9383-ace92f29acab-scripts\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.178969 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-run\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.281356 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16493970-0b04-4f00-9383-ace92f29acab-scripts\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.281441 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-run\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.281519 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbf4f\" (UniqueName: \"kubernetes.io/projected/16493970-0b04-4f00-9383-ace92f29acab-kube-api-access-bbf4f\") pod 
\"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.281615 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/16493970-0b04-4f00-9383-ace92f29acab-additional-scripts\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.281659 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-log-ovn\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.281686 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-run-ovn\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.282001 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-run-ovn\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.282069 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-log-ovn\") pod 
\"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.282084 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-run\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.283100 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/16493970-0b04-4f00-9383-ace92f29acab-additional-scripts\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.283823 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16493970-0b04-4f00-9383-ace92f29acab-scripts\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.301060 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbf4f\" (UniqueName: \"kubernetes.io/projected/16493970-0b04-4f00-9383-ace92f29acab-kube-api-access-bbf4f\") pod \"ovn-controller-wj47k-config-gf29n\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.377014 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.454217 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f2de205-100d-4ef3-ae6a-6a4119b95a5f" path="/var/lib/kubelet/pods/0f2de205-100d-4ef3-ae6a-6a4119b95a5f/volumes" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.492056 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.506607 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ea1dafcf-631a-4ae6-8aad-d716b977402d-etc-swift\") pod \"swift-storage-0\" (UID: \"ea1dafcf-631a-4ae6-8aad-d716b977402d\") " pod="openstack/swift-storage-0" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.646499 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 24 21:41:14 crc kubenswrapper[4915]: I1124 21:41:14.670387 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-wj47k-config-gf29n"] Nov 24 21:41:15 crc kubenswrapper[4915]: I1124 21:41:15.206443 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 24 21:41:15 crc kubenswrapper[4915]: W1124 21:41:15.240805 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea1dafcf_631a_4ae6_8aad_d716b977402d.slice/crio-1423bbf5718aefa63448a6db6a6fbe83f1ed62c885350acc62b059f4d70e65c8 WatchSource:0}: Error finding container 1423bbf5718aefa63448a6db6a6fbe83f1ed62c885350acc62b059f4d70e65c8: Status 404 returned error can't find the container with id 1423bbf5718aefa63448a6db6a6fbe83f1ed62c885350acc62b059f4d70e65c8 Nov 24 21:41:15 crc kubenswrapper[4915]: I1124 21:41:15.439627 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"1423bbf5718aefa63448a6db6a6fbe83f1ed62c885350acc62b059f4d70e65c8"} Nov 24 21:41:15 crc kubenswrapper[4915]: I1124 21:41:15.441254 4915 generic.go:334] "Generic (PLEG): container finished" podID="16493970-0b04-4f00-9383-ace92f29acab" containerID="e880bef4e0f1c9ea8c2de97d2a0290d4e0275c418a1d605a705a3c400e423ec7" exitCode=0 Nov 24 21:41:15 crc kubenswrapper[4915]: I1124 21:41:15.441284 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wj47k-config-gf29n" event={"ID":"16493970-0b04-4f00-9383-ace92f29acab","Type":"ContainerDied","Data":"e880bef4e0f1c9ea8c2de97d2a0290d4e0275c418a1d605a705a3c400e423ec7"} Nov 24 21:41:15 crc kubenswrapper[4915]: I1124 21:41:15.441300 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wj47k-config-gf29n" 
event={"ID":"16493970-0b04-4f00-9383-ace92f29acab","Type":"ContainerStarted","Data":"028a25c6194be0926a6b0f5a308283d66cabd48d9563969873552bcb6f8fd972"} Nov 24 21:41:15 crc kubenswrapper[4915]: I1124 21:41:15.524932 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-wj47k" Nov 24 21:41:17 crc kubenswrapper[4915]: I1124 21:41:17.884329 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:17 crc kubenswrapper[4915]: I1124 21:41:17.886804 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:18 crc kubenswrapper[4915]: I1124 21:41:18.471693 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.034218 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.086503 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16493970-0b04-4f00-9383-ace92f29acab-scripts\") pod \"16493970-0b04-4f00-9383-ace92f29acab\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.086582 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-run\") pod \"16493970-0b04-4f00-9383-ace92f29acab\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.086609 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-log-ovn\") 
pod \"16493970-0b04-4f00-9383-ace92f29acab\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.086746 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbf4f\" (UniqueName: \"kubernetes.io/projected/16493970-0b04-4f00-9383-ace92f29acab-kube-api-access-bbf4f\") pod \"16493970-0b04-4f00-9383-ace92f29acab\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.086830 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-run" (OuterVolumeSpecName: "var-run") pod "16493970-0b04-4f00-9383-ace92f29acab" (UID: "16493970-0b04-4f00-9383-ace92f29acab"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.086864 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-run-ovn\") pod \"16493970-0b04-4f00-9383-ace92f29acab\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.087074 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/16493970-0b04-4f00-9383-ace92f29acab-additional-scripts\") pod \"16493970-0b04-4f00-9383-ace92f29acab\" (UID: \"16493970-0b04-4f00-9383-ace92f29acab\") " Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.086893 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "16493970-0b04-4f00-9383-ace92f29acab" (UID: "16493970-0b04-4f00-9383-ace92f29acab"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.086909 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "16493970-0b04-4f00-9383-ace92f29acab" (UID: "16493970-0b04-4f00-9383-ace92f29acab"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.087970 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16493970-0b04-4f00-9383-ace92f29acab-scripts" (OuterVolumeSpecName: "scripts") pod "16493970-0b04-4f00-9383-ace92f29acab" (UID: "16493970-0b04-4f00-9383-ace92f29acab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.088982 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16493970-0b04-4f00-9383-ace92f29acab-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "16493970-0b04-4f00-9383-ace92f29acab" (UID: "16493970-0b04-4f00-9383-ace92f29acab"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.088275 4915 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.089219 4915 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.092619 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16493970-0b04-4f00-9383-ace92f29acab-kube-api-access-bbf4f" (OuterVolumeSpecName: "kube-api-access-bbf4f") pod "16493970-0b04-4f00-9383-ace92f29acab" (UID: "16493970-0b04-4f00-9383-ace92f29acab"). InnerVolumeSpecName "kube-api-access-bbf4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.191062 4915 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/16493970-0b04-4f00-9383-ace92f29acab-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.191091 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16493970-0b04-4f00-9383-ace92f29acab-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.191100 4915 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/16493970-0b04-4f00-9383-ace92f29acab-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.191111 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbf4f\" (UniqueName: 
\"kubernetes.io/projected/16493970-0b04-4f00-9383-ace92f29acab-kube-api-access-bbf4f\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.481477 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"75df5ce74aea79e740701ce29db0346d21349e99913799797ebf94d540b0fb4d"} Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.481852 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"e2b175d9d2ac717bda11814ae941ce2a7b437bd3ee8a7e12d80bb0fed23cdce1"} Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.483167 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-wj47k-config-gf29n" Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.483170 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-wj47k-config-gf29n" event={"ID":"16493970-0b04-4f00-9383-ace92f29acab","Type":"ContainerDied","Data":"028a25c6194be0926a6b0f5a308283d66cabd48d9563969873552bcb6f8fd972"} Nov 24 21:41:19 crc kubenswrapper[4915]: I1124 21:41:19.483328 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="028a25c6194be0926a6b0f5a308283d66cabd48d9563969873552bcb6f8fd972" Nov 24 21:41:20 crc kubenswrapper[4915]: I1124 21:41:20.164230 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-wj47k-config-gf29n"] Nov 24 21:41:20 crc kubenswrapper[4915]: I1124 21:41:20.182688 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-wj47k-config-gf29n"] Nov 24 21:41:20 crc kubenswrapper[4915]: I1124 21:41:20.440651 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16493970-0b04-4f00-9383-ace92f29acab" 
path="/var/lib/kubelet/pods/16493970-0b04-4f00-9383-ace92f29acab/volumes" Nov 24 21:41:20 crc kubenswrapper[4915]: I1124 21:41:20.508592 4915 generic.go:334] "Generic (PLEG): container finished" podID="815268b1-2fe4-4c1b-a56f-73cc4a0ccf46" containerID="c017670a0bfcff348bb4123c9a8d89bbd8e82682209fc96e8d1d194264d0bad7" exitCode=0 Nov 24 21:41:20 crc kubenswrapper[4915]: I1124 21:41:20.508686 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wtkdf" event={"ID":"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46","Type":"ContainerDied","Data":"c017670a0bfcff348bb4123c9a8d89bbd8e82682209fc96e8d1d194264d0bad7"} Nov 24 21:41:20 crc kubenswrapper[4915]: I1124 21:41:20.515183 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"5b0d30180f2ea6b1f5faa4acea02e3ee6b750c2b56f01a3e0227998f37fe1ef4"} Nov 24 21:41:20 crc kubenswrapper[4915]: I1124 21:41:20.515239 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"7ebb6dea284567095aab48f57dd0ecc8a76520efc4e8fcaff7b8cc19e8fb1186"} Nov 24 21:41:21 crc kubenswrapper[4915]: I1124 21:41:21.567289 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"c63f29ed70e48c7f73c7fc13880dd87976a64794b256bbef629c047bcd671ec4"} Nov 24 21:41:21 crc kubenswrapper[4915]: I1124 21:41:21.567901 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"12b99fd8b305533da6de89d04c5cf36d800e6c5eadaae20df6a7f087f2167712"} Nov 24 21:41:21 crc kubenswrapper[4915]: I1124 21:41:21.567920 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"66650e31404a19728d593e094ffd7b0f49bbaa4caf79ac9df83eb08a2206e739"} Nov 24 21:41:21 crc kubenswrapper[4915]: I1124 21:41:21.699601 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:41:21 crc kubenswrapper[4915]: I1124 21:41:21.699875 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="59f54e36-0033-4e91-be8b-7d447d666d04" containerName="prometheus" containerID="cri-o://6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36" gracePeriod=600 Nov 24 21:41:21 crc kubenswrapper[4915]: I1124 21:41:21.700854 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="59f54e36-0033-4e91-be8b-7d447d666d04" containerName="config-reloader" containerID="cri-o://29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292" gracePeriod=600 Nov 24 21:41:21 crc kubenswrapper[4915]: I1124 21:41:21.700861 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="59f54e36-0033-4e91-be8b-7d447d666d04" containerName="thanos-sidecar" containerID="cri-o://41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0" gracePeriod=600 Nov 24 21:41:21 crc kubenswrapper[4915]: I1124 21:41:21.981350 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wtkdf" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.055185 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpjdq\" (UniqueName: \"kubernetes.io/projected/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-kube-api-access-hpjdq\") pod \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.055241 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-config-data\") pod \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.055312 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-combined-ca-bundle\") pod \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.055370 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-db-sync-config-data\") pod \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\" (UID: \"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46\") " Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.070555 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "815268b1-2fe4-4c1b-a56f-73cc4a0ccf46" (UID: "815268b1-2fe4-4c1b-a56f-73cc4a0ccf46"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.071350 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-kube-api-access-hpjdq" (OuterVolumeSpecName: "kube-api-access-hpjdq") pod "815268b1-2fe4-4c1b-a56f-73cc4a0ccf46" (UID: "815268b1-2fe4-4c1b-a56f-73cc4a0ccf46"). InnerVolumeSpecName "kube-api-access-hpjdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.119047 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "815268b1-2fe4-4c1b-a56f-73cc4a0ccf46" (UID: "815268b1-2fe4-4c1b-a56f-73cc4a0ccf46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.158518 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpjdq\" (UniqueName: \"kubernetes.io/projected/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-kube-api-access-hpjdq\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.158551 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.158563 4915 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.159997 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-config-data" (OuterVolumeSpecName: "config-data") pod "815268b1-2fe4-4c1b-a56f-73cc4a0ccf46" (UID: "815268b1-2fe4-4c1b-a56f-73cc4a0ccf46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.261657 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.277211 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.362534 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfh5v\" (UniqueName: \"kubernetes.io/projected/59f54e36-0033-4e91-be8b-7d447d666d04-kube-api-access-jfh5v\") pod \"59f54e36-0033-4e91-be8b-7d447d666d04\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.363007 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f46d0998-3845-4338-bf4d-9c6294f76988\") pod \"59f54e36-0033-4e91-be8b-7d447d666d04\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.363098 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/59f54e36-0033-4e91-be8b-7d447d666d04-tls-assets\") pod \"59f54e36-0033-4e91-be8b-7d447d666d04\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.363145 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/59f54e36-0033-4e91-be8b-7d447d666d04-prometheus-metric-storage-rulefiles-0\") pod \"59f54e36-0033-4e91-be8b-7d447d666d04\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.363191 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-thanos-prometheus-http-client-file\") pod \"59f54e36-0033-4e91-be8b-7d447d666d04\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.363284 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-web-config\") pod \"59f54e36-0033-4e91-be8b-7d447d666d04\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.363419 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/59f54e36-0033-4e91-be8b-7d447d666d04-config-out\") pod \"59f54e36-0033-4e91-be8b-7d447d666d04\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.363449 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-config\") pod \"59f54e36-0033-4e91-be8b-7d447d666d04\" (UID: \"59f54e36-0033-4e91-be8b-7d447d666d04\") " Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.365802 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59f54e36-0033-4e91-be8b-7d447d666d04-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod 
"59f54e36-0033-4e91-be8b-7d447d666d04" (UID: "59f54e36-0033-4e91-be8b-7d447d666d04"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.368955 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-config" (OuterVolumeSpecName: "config") pod "59f54e36-0033-4e91-be8b-7d447d666d04" (UID: "59f54e36-0033-4e91-be8b-7d447d666d04"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.370519 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f54e36-0033-4e91-be8b-7d447d666d04-kube-api-access-jfh5v" (OuterVolumeSpecName: "kube-api-access-jfh5v") pod "59f54e36-0033-4e91-be8b-7d447d666d04" (UID: "59f54e36-0033-4e91-be8b-7d447d666d04"). InnerVolumeSpecName "kube-api-access-jfh5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.375955 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f54e36-0033-4e91-be8b-7d447d666d04-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "59f54e36-0033-4e91-be8b-7d447d666d04" (UID: "59f54e36-0033-4e91-be8b-7d447d666d04"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.377915 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59f54e36-0033-4e91-be8b-7d447d666d04-config-out" (OuterVolumeSpecName: "config-out") pod "59f54e36-0033-4e91-be8b-7d447d666d04" (UID: "59f54e36-0033-4e91-be8b-7d447d666d04"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.383012 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "59f54e36-0033-4e91-be8b-7d447d666d04" (UID: "59f54e36-0033-4e91-be8b-7d447d666d04"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.397355 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f46d0998-3845-4338-bf4d-9c6294f76988" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "59f54e36-0033-4e91-be8b-7d447d666d04" (UID: "59f54e36-0033-4e91-be8b-7d447d666d04"). InnerVolumeSpecName "pvc-f46d0998-3845-4338-bf4d-9c6294f76988". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.418341 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-web-config" (OuterVolumeSpecName: "web-config") pod "59f54e36-0033-4e91-be8b-7d447d666d04" (UID: "59f54e36-0033-4e91-be8b-7d447d666d04"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.466619 4915 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-web-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.466652 4915 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/59f54e36-0033-4e91-be8b-7d447d666d04-config-out\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.466666 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.466678 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfh5v\" (UniqueName: \"kubernetes.io/projected/59f54e36-0033-4e91-be8b-7d447d666d04-kube-api-access-jfh5v\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.466708 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f46d0998-3845-4338-bf4d-9c6294f76988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f46d0998-3845-4338-bf4d-9c6294f76988\") on node \"crc\" " Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.466724 4915 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/59f54e36-0033-4e91-be8b-7d447d666d04-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.466737 4915 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/59f54e36-0033-4e91-be8b-7d447d666d04-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 
24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.466750 4915 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/59f54e36-0033-4e91-be8b-7d447d666d04-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.503080 4915 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.503248 4915 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f46d0998-3845-4338-bf4d-9c6294f76988" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f46d0998-3845-4338-bf4d-9c6294f76988") on node "crc" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.568457 4915 reconciler_common.go:293] "Volume detached for volume \"pvc-f46d0998-3845-4338-bf4d-9c6294f76988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f46d0998-3845-4338-bf4d-9c6294f76988\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.583611 4915 generic.go:334] "Generic (PLEG): container finished" podID="59f54e36-0033-4e91-be8b-7d447d666d04" containerID="41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0" exitCode=0 Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.583664 4915 generic.go:334] "Generic (PLEG): container finished" podID="59f54e36-0033-4e91-be8b-7d447d666d04" containerID="29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292" exitCode=0 Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.583674 4915 generic.go:334] "Generic (PLEG): container finished" podID="59f54e36-0033-4e91-be8b-7d447d666d04" containerID="6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36" exitCode=0 Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.583716 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"59f54e36-0033-4e91-be8b-7d447d666d04","Type":"ContainerDied","Data":"41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0"} Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.583763 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"59f54e36-0033-4e91-be8b-7d447d666d04","Type":"ContainerDied","Data":"29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292"} Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.583787 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"59f54e36-0033-4e91-be8b-7d447d666d04","Type":"ContainerDied","Data":"6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36"} Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.583796 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"59f54e36-0033-4e91-be8b-7d447d666d04","Type":"ContainerDied","Data":"40674364d51505edffbe971664af73918e7803fc104f22e76a759d43eb969576"} Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.583815 4915 scope.go:117] "RemoveContainer" containerID="41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.583948 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.589146 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-wtkdf" event={"ID":"815268b1-2fe4-4c1b-a56f-73cc4a0ccf46","Type":"ContainerDied","Data":"80913142c66dece8975985eb6890a00cca9abf7d21d475d9376122f761ffea4f"} Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.589172 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-wtkdf" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.589187 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80913142c66dece8975985eb6890a00cca9abf7d21d475d9376122f761ffea4f" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.597324 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"88da46d9fa0dedbabd996e4bb7a4306719b5c860829a90d4a557f4a2b354523c"} Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.612967 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.623532 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.632018 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:41:22 crc kubenswrapper[4915]: E1124 21:41:22.632380 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f54e36-0033-4e91-be8b-7d447d666d04" containerName="prometheus" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.632396 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f54e36-0033-4e91-be8b-7d447d666d04" containerName="prometheus" Nov 24 21:41:22 crc kubenswrapper[4915]: E1124 21:41:22.632423 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f54e36-0033-4e91-be8b-7d447d666d04" containerName="thanos-sidecar" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.632429 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f54e36-0033-4e91-be8b-7d447d666d04" containerName="thanos-sidecar" Nov 24 21:41:22 crc kubenswrapper[4915]: E1124 21:41:22.632439 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="59f54e36-0033-4e91-be8b-7d447d666d04" containerName="config-reloader" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.632445 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f54e36-0033-4e91-be8b-7d447d666d04" containerName="config-reloader" Nov 24 21:41:22 crc kubenswrapper[4915]: E1124 21:41:22.632462 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16493970-0b04-4f00-9383-ace92f29acab" containerName="ovn-config" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.632468 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="16493970-0b04-4f00-9383-ace92f29acab" containerName="ovn-config" Nov 24 21:41:22 crc kubenswrapper[4915]: E1124 21:41:22.632479 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f54e36-0033-4e91-be8b-7d447d666d04" containerName="init-config-reloader" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.632485 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f54e36-0033-4e91-be8b-7d447d666d04" containerName="init-config-reloader" Nov 24 21:41:22 crc kubenswrapper[4915]: E1124 21:41:22.632495 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815268b1-2fe4-4c1b-a56f-73cc4a0ccf46" containerName="glance-db-sync" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.632501 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="815268b1-2fe4-4c1b-a56f-73cc4a0ccf46" containerName="glance-db-sync" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.632670 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f54e36-0033-4e91-be8b-7d447d666d04" containerName="thanos-sidecar" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.632882 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f54e36-0033-4e91-be8b-7d447d666d04" containerName="config-reloader" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.632890 4915 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="16493970-0b04-4f00-9383-ace92f29acab" containerName="ovn-config" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.632911 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="815268b1-2fe4-4c1b-a56f-73cc4a0ccf46" containerName="glance-db-sync" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.632923 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f54e36-0033-4e91-be8b-7d447d666d04" containerName="prometheus" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.634594 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.638800 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.641041 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-9hgrx" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.641345 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.642542 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.644383 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.645346 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.652573 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 
21:41:22.655904 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.670140 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cbc69ba4-d747-467c-98ab-d22491a8203c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.670219 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.670265 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4thc\" (UniqueName: \"kubernetes.io/projected/cbc69ba4-d747-467c-98ab-d22491a8203c-kube-api-access-g4thc\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.670342 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.670378 
4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f46d0998-3845-4338-bf4d-9c6294f76988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f46d0998-3845-4338-bf4d-9c6294f76988\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.670427 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.670460 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cbc69ba4-d747-467c-98ab-d22491a8203c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.670500 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-config\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.670548 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-web-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.670573 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.670600 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cbc69ba4-d747-467c-98ab-d22491a8203c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.771798 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.771860 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cbc69ba4-d747-467c-98ab-d22491a8203c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.771904 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-config\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.772994 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cbc69ba4-d747-467c-98ab-d22491a8203c-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.771946 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.773320 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.773358 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cbc69ba4-d747-467c-98ab-d22491a8203c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.773488 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/cbc69ba4-d747-467c-98ab-d22491a8203c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.773559 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.773612 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4thc\" (UniqueName: \"kubernetes.io/projected/cbc69ba4-d747-467c-98ab-d22491a8203c-kube-api-access-g4thc\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.776657 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-config\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.776792 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cbc69ba4-d747-467c-98ab-d22491a8203c-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.776829 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.776886 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f46d0998-3845-4338-bf4d-9c6294f76988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f46d0998-3845-4338-bf4d-9c6294f76988\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.780241 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.780624 4915 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.780667 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f46d0998-3845-4338-bf4d-9c6294f76988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f46d0998-3845-4338-bf4d-9c6294f76988\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2d77608a2870e9deeb05060c3c02e327bdb3f202575835cd4a227365c50b165b/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.780996 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cbc69ba4-d747-467c-98ab-d22491a8203c-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0"
Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.782125 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0"
Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.782338 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0"
Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.783397 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0"
Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.784214 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/cbc69ba4-d747-467c-98ab-d22491a8203c-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0"
Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.803396 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4thc\" (UniqueName: \"kubernetes.io/projected/cbc69ba4-d747-467c-98ab-d22491a8203c-kube-api-access-g4thc\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0"
Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.852138 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f46d0998-3845-4338-bf4d-9c6294f76988\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f46d0998-3845-4338-bf4d-9c6294f76988\") pod \"prometheus-metric-storage-0\" (UID: \"cbc69ba4-d747-467c-98ab-d22491a8203c\") " pod="openstack/prometheus-metric-storage-0"
Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.898511 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-sj7nd"]
Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.900628 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.950477 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-sj7nd"]
Nov 24 21:41:22 crc kubenswrapper[4915]: I1124 21:41:22.969669 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:22.998418 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:22.998657 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-dns-svc\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:22.998764 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-config\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:22.998812 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8plg\" (UniqueName: \"kubernetes.io/projected/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-kube-api-access-b8plg\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:22.998864 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.101931 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-dns-svc\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.102853 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-dns-svc\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.103040 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-config\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.103139 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8plg\" (UniqueName: \"kubernetes.io/projected/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-kube-api-access-b8plg\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.103248 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.103341 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.104197 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-config\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.104216 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.104330 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.121051 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8plg\" (UniqueName: \"kubernetes.io/projected/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-kube-api-access-b8plg\") pod \"dnsmasq-dns-74dc88fc-sj7nd\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.123646 4915 scope.go:117] "RemoveContainer" containerID="29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.233919 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.336240 4915 scope.go:117] "RemoveContainer" containerID="6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.389893 4915 scope.go:117] "RemoveContainer" containerID="987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.447076 4915 scope.go:117] "RemoveContainer" containerID="41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0"
Nov 24 21:41:23 crc kubenswrapper[4915]: E1124 21:41:23.447761 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0\": container with ID starting with 41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0 not found: ID does not exist" containerID="41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.447808 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0"} err="failed to get container status \"41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0\": rpc error: code = NotFound desc = could not find container \"41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0\": container with ID starting with 41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0 not found: ID does not exist"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.447828 4915 scope.go:117] "RemoveContainer" containerID="29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292"
Nov 24 21:41:23 crc kubenswrapper[4915]: E1124 21:41:23.450716 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292\": container with ID starting with 29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292 not found: ID does not exist" containerID="29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.450745 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292"} err="failed to get container status \"29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292\": rpc error: code = NotFound desc = could not find container \"29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292\": container with ID starting with 29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292 not found: ID does not exist"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.450760 4915 scope.go:117] "RemoveContainer" containerID="6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36"
Nov 24 21:41:23 crc kubenswrapper[4915]: E1124 21:41:23.451245 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36\": container with ID starting with 6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36 not found: ID does not exist" containerID="6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.451312 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36"} err="failed to get container status \"6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36\": rpc error: code = NotFound desc = could not find container \"6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36\": container with ID starting with 6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36 not found: ID does not exist"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.451363 4915 scope.go:117] "RemoveContainer" containerID="987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133"
Nov 24 21:41:23 crc kubenswrapper[4915]: E1124 21:41:23.451728 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133\": container with ID starting with 987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133 not found: ID does not exist" containerID="987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.451754 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133"} err="failed to get container status \"987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133\": rpc error: code = NotFound desc = could not find container \"987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133\": container with ID starting with 987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133 not found: ID does not exist"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.451771 4915 scope.go:117] "RemoveContainer" containerID="41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.452056 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0"} err="failed to get container status \"41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0\": rpc error: code = NotFound desc = could not find container \"41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0\": container with ID starting with 41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0 not found: ID does not exist"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.452097 4915 scope.go:117] "RemoveContainer" containerID="29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.452836 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292"} err="failed to get container status \"29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292\": rpc error: code = NotFound desc = could not find container \"29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292\": container with ID starting with 29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292 not found: ID does not exist"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.452873 4915 scope.go:117] "RemoveContainer" containerID="6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.453826 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36"} err="failed to get container status \"6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36\": rpc error: code = NotFound desc = could not find container \"6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36\": container with ID starting with 6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36 not found: ID does not exist"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.453863 4915 scope.go:117] "RemoveContainer" containerID="987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.454141 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133"} err="failed to get container status \"987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133\": rpc error: code = NotFound desc = could not find container \"987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133\": container with ID starting with 987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133 not found: ID does not exist"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.454165 4915 scope.go:117] "RemoveContainer" containerID="41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.454373 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0"} err="failed to get container status \"41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0\": rpc error: code = NotFound desc = could not find container \"41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0\": container with ID starting with 41722e853b441032ab537c710a846e955a87bc128ec0ee06ff964101a03feed0 not found: ID does not exist"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.454401 4915 scope.go:117] "RemoveContainer" containerID="29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.454670 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292"} err="failed to get container status \"29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292\": rpc error: code = NotFound desc = could not find container \"29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292\": container with ID starting with 29978866ffa333011eb6603adf00aa7e921e247bf421a7aa334dda98b7686292 not found: ID does not exist"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.454694 4915 scope.go:117] "RemoveContainer" containerID="6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.455260 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36"} err="failed to get container status \"6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36\": rpc error: code = NotFound desc = could not find container \"6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36\": container with ID starting with 6ab40da7d3b07856ae020c83a7642ec8892d72dea0fbefc475370708062b6f36 not found: ID does not exist"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.455300 4915 scope.go:117] "RemoveContainer" containerID="987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.455641 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133"} err="failed to get container status \"987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133\": rpc error: code = NotFound desc = could not find container \"987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133\": container with ID starting with 987d19fc7a337053a115e4ac23f2d9b55373977d0707aae3966d28182c0f6133 not found: ID does not exist"
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.624936 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"cf97c4ec1b55d0f2252dd23aed3718519bd5d7915165aa37cc2cc3b51730ab0c"}
Nov 24 21:41:23 crc kubenswrapper[4915]: W1124 21:41:23.668037 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbc69ba4_d747_467c_98ab_d22491a8203c.slice/crio-4c49c59939b4f1d53011c55419ce7937a8dc72c0e7354edfa0c1289e002189fb WatchSource:0}: Error finding container 4c49c59939b4f1d53011c55419ce7937a8dc72c0e7354edfa0c1289e002189fb: Status 404 returned error can't find the container with id 4c49c59939b4f1d53011c55419ce7937a8dc72c0e7354edfa0c1289e002189fb
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.668542 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 24 21:41:23 crc kubenswrapper[4915]: I1124 21:41:23.842618 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-sj7nd"]
Nov 24 21:41:23 crc kubenswrapper[4915]: W1124 21:41:23.844884 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdd8b057_0c9e_4bc6_8b95_55beb17a7563.slice/crio-4df1edc24b255d7a880c0402bc08bebb02c711605a8f7bf81ba792951cb37388 WatchSource:0}: Error finding container 4df1edc24b255d7a880c0402bc08bebb02c711605a8f7bf81ba792951cb37388: Status 404 returned error can't find the container with id 4df1edc24b255d7a880c0402bc08bebb02c711605a8f7bf81ba792951cb37388
Nov 24 21:41:24 crc kubenswrapper[4915]: I1124 21:41:24.328203 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 21:41:24 crc kubenswrapper[4915]: I1124 21:41:24.328443 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:41:24 crc kubenswrapper[4915]: I1124 21:41:24.441560 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59f54e36-0033-4e91-be8b-7d447d666d04" path="/var/lib/kubelet/pods/59f54e36-0033-4e91-be8b-7d447d666d04/volumes"
Nov 24 21:41:24 crc kubenswrapper[4915]: I1124 21:41:24.637100 4915 generic.go:334] "Generic (PLEG): container finished" podID="cdd8b057-0c9e-4bc6-8b95-55beb17a7563" containerID="af9ee1aa18ff1870dbd543975255711a2a6e390b816116ff16c1e9f4300af1d7" exitCode=0
Nov 24 21:41:24 crc kubenswrapper[4915]: I1124 21:41:24.637304 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-sj7nd" event={"ID":"cdd8b057-0c9e-4bc6-8b95-55beb17a7563","Type":"ContainerDied","Data":"af9ee1aa18ff1870dbd543975255711a2a6e390b816116ff16c1e9f4300af1d7"}
Nov 24 21:41:24 crc kubenswrapper[4915]: I1124 21:41:24.637505 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-sj7nd" event={"ID":"cdd8b057-0c9e-4bc6-8b95-55beb17a7563","Type":"ContainerStarted","Data":"4df1edc24b255d7a880c0402bc08bebb02c711605a8f7bf81ba792951cb37388"}
Nov 24 21:41:24 crc kubenswrapper[4915]: I1124 21:41:24.646411 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"8124e61b98c8a624333f7c1641816a8c9b09653c4259ecf6b61fa1a1d5069123"}
Nov 24 21:41:24 crc kubenswrapper[4915]: I1124 21:41:24.646451 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"d42981c3ab9500bef63530332a940a07280bfd256f62567928def6188ae8dfe1"}
Nov 24 21:41:24 crc kubenswrapper[4915]: I1124 21:41:24.646463 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"fba0402b316bc94d4bf3adfa1e8e06b7b03d8631655370d57ab7825351f8b9e0"}
Nov 24 21:41:24 crc kubenswrapper[4915]: I1124 21:41:24.647692 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cbc69ba4-d747-467c-98ab-d22491a8203c","Type":"ContainerStarted","Data":"4c49c59939b4f1d53011c55419ce7937a8dc72c0e7354edfa0c1289e002189fb"}
Nov 24 21:41:25 crc kubenswrapper[4915]: I1124 21:41:25.668446 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"9f84c6597fbcaa1d269c90a3bd87a02a43019f62f4ee5db5f1b6a1d14c048737"}
Nov 24 21:41:25 crc kubenswrapper[4915]: I1124 21:41:25.668738 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"9516dcf5521f06b9ed106451724efdaf1cf69d3afec3608e17b8f6fe1446d158"}
Nov 24 21:41:25 crc kubenswrapper[4915]: I1124 21:41:25.668750 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ea1dafcf-631a-4ae6-8aad-d716b977402d","Type":"ContainerStarted","Data":"e68263b1e3572676fa1eafb668dd408f20bb9520691007b9d0f40476a931be67"}
Nov 24 21:41:25 crc kubenswrapper[4915]: I1124 21:41:25.671275 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-sj7nd" event={"ID":"cdd8b057-0c9e-4bc6-8b95-55beb17a7563","Type":"ContainerStarted","Data":"ce1afc60f87ebad07cb6720d86a00f388082ee5f6d05848f61eecb66be3a1183"}
Nov 24 21:41:25 crc kubenswrapper[4915]: I1124 21:41:25.671421 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74dc88fc-sj7nd"
Nov 24 21:41:25 crc kubenswrapper[4915]: I1124 21:41:25.711289 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.76518923 podStartE2EDuration="44.711270266s" podCreationTimestamp="2025-11-24 21:40:41 +0000 UTC" firstStartedPulling="2025-11-24 21:41:15.243329133 +0000 UTC m=+1293.559581306" lastFinishedPulling="2025-11-24 21:41:23.189410179 +0000 UTC m=+1301.505662342" observedRunningTime="2025-11-24 21:41:25.710931127 +0000 UTC m=+1304.027183320" watchObservedRunningTime="2025-11-24 21:41:25.711270266 +0000 UTC m=+1304.027522449"
Nov 24 21:41:25 crc kubenswrapper[4915]: I1124 21:41:25.734996 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74dc88fc-sj7nd" podStartSLOduration=3.734978624 podStartE2EDuration="3.734978624s" podCreationTimestamp="2025-11-24 21:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:25.733269959 +0000 UTC m=+1304.049522142" watchObservedRunningTime="2025-11-24 21:41:25.734978624 +0000 UTC m=+1304.051230797"
Nov 24 21:41:25 crc kubenswrapper[4915]: I1124 21:41:25.972534 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-sj7nd"]
Nov 24 21:41:25 crc kubenswrapper[4915]: I1124 21:41:25.994986 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.037504 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-96hc8"]
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.039341 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.046641 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-96hc8"]
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.061359 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-config\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.061445 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdblp\" (UniqueName: \"kubernetes.io/projected/f4e32b86-3497-4bf8-84d7-594ad6597982-kube-api-access-rdblp\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.061474 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.061513 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.061528 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.061592 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.066358 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.086949 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.163454 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.163508 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.163632 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.163722 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-config\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.163802 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdblp\" (UniqueName: \"kubernetes.io/projected/f4e32b86-3497-4bf8-84d7-594ad6597982-kube-api-access-rdblp\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.163852 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.165363 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.165398 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.166078 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.169650 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-config\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.172631 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.193550 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdblp\" (UniqueName: \"kubernetes.io/projected/f4e32b86-3497-4bf8-84d7-594ad6597982-kube-api-access-rdblp\") pod \"dnsmasq-dns-5f59b8f679-96hc8\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.373400 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.403927 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-s5npb"]
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.418492 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-s5npb"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.453235 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-da23-account-create-xqsrk"]
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.454692 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-s5npb"]
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.454796 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-da23-account-create-xqsrk"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.456351 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.521261 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-da23-account-create-xqsrk"]
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.554006 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-ssnfp"]
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.555486 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ssnfp"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.563734 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-ssnfp"]
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.574382 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4699e1f2-8d9a-4aff-bf8a-d648162c8bf3-operator-scripts\") pod \"barbican-da23-account-create-xqsrk\" (UID: \"4699e1f2-8d9a-4aff-bf8a-d648162c8bf3\") " pod="openstack/barbican-da23-account-create-xqsrk"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.574506 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmsw8\" (UniqueName: \"kubernetes.io/projected/4699e1f2-8d9a-4aff-bf8a-d648162c8bf3-kube-api-access-fmsw8\") pod \"barbican-da23-account-create-xqsrk\" (UID: \"4699e1f2-8d9a-4aff-bf8a-d648162c8bf3\") " pod="openstack/barbican-da23-account-create-xqsrk"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.574543 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwrf5\" (UniqueName: \"kubernetes.io/projected/919f233f-0ca4-4b04-ad6f-4b4bebba4f3b-kube-api-access-bwrf5\") pod \"barbican-db-create-s5npb\" (UID: \"919f233f-0ca4-4b04-ad6f-4b4bebba4f3b\") " pod="openstack/barbican-db-create-s5npb"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.574630 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/919f233f-0ca4-4b04-ad6f-4b4bebba4f3b-operator-scripts\") pod \"barbican-db-create-s5npb\" (UID: \"919f233f-0ca4-4b04-ad6f-4b4bebba4f3b\") " pod="openstack/barbican-db-create-s5npb"
Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.677120 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bwrf5\" (UniqueName: \"kubernetes.io/projected/919f233f-0ca4-4b04-ad6f-4b4bebba4f3b-kube-api-access-bwrf5\") pod \"barbican-db-create-s5npb\" (UID: \"919f233f-0ca4-4b04-ad6f-4b4bebba4f3b\") " pod="openstack/barbican-db-create-s5npb" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.677202 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/919f233f-0ca4-4b04-ad6f-4b4bebba4f3b-operator-scripts\") pod \"barbican-db-create-s5npb\" (UID: \"919f233f-0ca4-4b04-ad6f-4b4bebba4f3b\") " pod="openstack/barbican-db-create-s5npb" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.677237 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/240b0e2f-e08f-45b7-a006-4a982095780c-operator-scripts\") pod \"heat-db-create-ssnfp\" (UID: \"240b0e2f-e08f-45b7-a006-4a982095780c\") " pod="openstack/heat-db-create-ssnfp" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.677277 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4699e1f2-8d9a-4aff-bf8a-d648162c8bf3-operator-scripts\") pod \"barbican-da23-account-create-xqsrk\" (UID: \"4699e1f2-8d9a-4aff-bf8a-d648162c8bf3\") " pod="openstack/barbican-da23-account-create-xqsrk" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.677308 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxvwn\" (UniqueName: \"kubernetes.io/projected/240b0e2f-e08f-45b7-a006-4a982095780c-kube-api-access-fxvwn\") pod \"heat-db-create-ssnfp\" (UID: \"240b0e2f-e08f-45b7-a006-4a982095780c\") " pod="openstack/heat-db-create-ssnfp" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.677397 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-fmsw8\" (UniqueName: \"kubernetes.io/projected/4699e1f2-8d9a-4aff-bf8a-d648162c8bf3-kube-api-access-fmsw8\") pod \"barbican-da23-account-create-xqsrk\" (UID: \"4699e1f2-8d9a-4aff-bf8a-d648162c8bf3\") " pod="openstack/barbican-da23-account-create-xqsrk" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.678478 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/919f233f-0ca4-4b04-ad6f-4b4bebba4f3b-operator-scripts\") pod \"barbican-db-create-s5npb\" (UID: \"919f233f-0ca4-4b04-ad6f-4b4bebba4f3b\") " pod="openstack/barbican-db-create-s5npb" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.678961 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4699e1f2-8d9a-4aff-bf8a-d648162c8bf3-operator-scripts\") pod \"barbican-da23-account-create-xqsrk\" (UID: \"4699e1f2-8d9a-4aff-bf8a-d648162c8bf3\") " pod="openstack/barbican-da23-account-create-xqsrk" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.710518 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmsw8\" (UniqueName: \"kubernetes.io/projected/4699e1f2-8d9a-4aff-bf8a-d648162c8bf3-kube-api-access-fmsw8\") pod \"barbican-da23-account-create-xqsrk\" (UID: \"4699e1f2-8d9a-4aff-bf8a-d648162c8bf3\") " pod="openstack/barbican-da23-account-create-xqsrk" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.714387 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cbc69ba4-d747-467c-98ab-d22491a8203c","Type":"ContainerStarted","Data":"2d8e1ed9ea039c46b48f8f3b2a44e99fd7a3806d60979edfea9d14cc94428fd9"} Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.722487 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwrf5\" (UniqueName: 
\"kubernetes.io/projected/919f233f-0ca4-4b04-ad6f-4b4bebba4f3b-kube-api-access-bwrf5\") pod \"barbican-db-create-s5npb\" (UID: \"919f233f-0ca4-4b04-ad6f-4b4bebba4f3b\") " pod="openstack/barbican-db-create-s5npb" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.725106 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-kg5gb"] Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.726579 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kg5gb" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.745890 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-4269-account-create-rbxk7"] Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.747822 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4269-account-create-rbxk7" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.750908 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.758106 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kg5gb"] Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.774832 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-5fb97"] Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.776149 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5fb97" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.783665 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.783910 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x7cdq" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.784041 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.784131 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.785377 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/240b0e2f-e08f-45b7-a006-4a982095780c-operator-scripts\") pod \"heat-db-create-ssnfp\" (UID: \"240b0e2f-e08f-45b7-a006-4a982095780c\") " pod="openstack/heat-db-create-ssnfp" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.785439 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxvwn\" (UniqueName: \"kubernetes.io/projected/240b0e2f-e08f-45b7-a006-4a982095780c-kube-api-access-fxvwn\") pod \"heat-db-create-ssnfp\" (UID: \"240b0e2f-e08f-45b7-a006-4a982095780c\") " pod="openstack/heat-db-create-ssnfp" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.786225 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/240b0e2f-e08f-45b7-a006-4a982095780c-operator-scripts\") pod \"heat-db-create-ssnfp\" (UID: \"240b0e2f-e08f-45b7-a006-4a982095780c\") " pod="openstack/heat-db-create-ssnfp" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.813704 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/keystone-db-sync-5fb97"] Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.819480 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxvwn\" (UniqueName: \"kubernetes.io/projected/240b0e2f-e08f-45b7-a006-4a982095780c-kube-api-access-fxvwn\") pod \"heat-db-create-ssnfp\" (UID: \"240b0e2f-e08f-45b7-a006-4a982095780c\") " pod="openstack/heat-db-create-ssnfp" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.823177 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-s5npb" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.828854 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4269-account-create-rbxk7"] Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.842413 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-da23-account-create-xqsrk" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.886921 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4df9l\" (UniqueName: \"kubernetes.io/projected/c74c4c4b-c291-42e8-a7f5-37fc513af9b0-kube-api-access-4df9l\") pod \"cinder-db-create-kg5gb\" (UID: \"c74c4c4b-c291-42e8-a7f5-37fc513af9b0\") " pod="openstack/cinder-db-create-kg5gb" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.887002 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzzzk\" (UniqueName: \"kubernetes.io/projected/2cd7abca-28b3-4ddc-b209-7f95193ffa11-kube-api-access-bzzzk\") pod \"keystone-db-sync-5fb97\" (UID: \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\") " pod="openstack/keystone-db-sync-5fb97" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.887073 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2cd7abca-28b3-4ddc-b209-7f95193ffa11-combined-ca-bundle\") pod \"keystone-db-sync-5fb97\" (UID: \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\") " pod="openstack/keystone-db-sync-5fb97" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.887125 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cd7abca-28b3-4ddc-b209-7f95193ffa11-config-data\") pod \"keystone-db-sync-5fb97\" (UID: \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\") " pod="openstack/keystone-db-sync-5fb97" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.887192 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a9cf3f7-758e-4c7a-9ace-de87acde4680-operator-scripts\") pod \"cinder-4269-account-create-rbxk7\" (UID: \"0a9cf3f7-758e-4c7a-9ace-de87acde4680\") " pod="openstack/cinder-4269-account-create-rbxk7" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.887321 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74c4c4b-c291-42e8-a7f5-37fc513af9b0-operator-scripts\") pod \"cinder-db-create-kg5gb\" (UID: \"c74c4c4b-c291-42e8-a7f5-37fc513af9b0\") " pod="openstack/cinder-db-create-kg5gb" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.887407 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msvbl\" (UniqueName: \"kubernetes.io/projected/0a9cf3f7-758e-4c7a-9ace-de87acde4680-kube-api-access-msvbl\") pod \"cinder-4269-account-create-rbxk7\" (UID: \"0a9cf3f7-758e-4c7a-9ace-de87acde4680\") " pod="openstack/cinder-4269-account-create-rbxk7" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.894109 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-tdtzl"] Nov 
24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.895908 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tdtzl" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.919725 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-e10f-account-create-5mrxz"] Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.921390 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ssnfp" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.921517 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-e10f-account-create-5mrxz" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.924348 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.927550 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-tdtzl"] Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.957885 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-e10f-account-create-5mrxz"] Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.989444 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4df9l\" (UniqueName: \"kubernetes.io/projected/c74c4c4b-c291-42e8-a7f5-37fc513af9b0-kube-api-access-4df9l\") pod \"cinder-db-create-kg5gb\" (UID: \"c74c4c4b-c291-42e8-a7f5-37fc513af9b0\") " pod="openstack/cinder-db-create-kg5gb" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.989514 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzzzk\" (UniqueName: \"kubernetes.io/projected/2cd7abca-28b3-4ddc-b209-7f95193ffa11-kube-api-access-bzzzk\") pod \"keystone-db-sync-5fb97\" (UID: \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\") " pod="openstack/keystone-db-sync-5fb97" Nov 24 21:41:26 crc 
kubenswrapper[4915]: I1124 21:41:26.989543 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd7abca-28b3-4ddc-b209-7f95193ffa11-combined-ca-bundle\") pod \"keystone-db-sync-5fb97\" (UID: \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\") " pod="openstack/keystone-db-sync-5fb97" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.989589 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cd7abca-28b3-4ddc-b209-7f95193ffa11-config-data\") pod \"keystone-db-sync-5fb97\" (UID: \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\") " pod="openstack/keystone-db-sync-5fb97" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.989628 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a9cf3f7-758e-4c7a-9ace-de87acde4680-operator-scripts\") pod \"cinder-4269-account-create-rbxk7\" (UID: \"0a9cf3f7-758e-4c7a-9ace-de87acde4680\") " pod="openstack/cinder-4269-account-create-rbxk7" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.989672 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7spwq\" (UniqueName: \"kubernetes.io/projected/00e73e89-3b03-49a8-878b-3b208952476b-kube-api-access-7spwq\") pod \"heat-e10f-account-create-5mrxz\" (UID: \"00e73e89-3b03-49a8-878b-3b208952476b\") " pod="openstack/heat-e10f-account-create-5mrxz" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.989708 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea12e14a-f2f3-476e-886d-5a1648322134-operator-scripts\") pod \"neutron-db-create-tdtzl\" (UID: \"ea12e14a-f2f3-476e-886d-5a1648322134\") " pod="openstack/neutron-db-create-tdtzl" Nov 24 21:41:26 crc kubenswrapper[4915]: 
I1124 21:41:26.989731 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00e73e89-3b03-49a8-878b-3b208952476b-operator-scripts\") pod \"heat-e10f-account-create-5mrxz\" (UID: \"00e73e89-3b03-49a8-878b-3b208952476b\") " pod="openstack/heat-e10f-account-create-5mrxz" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.989757 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbq9g\" (UniqueName: \"kubernetes.io/projected/ea12e14a-f2f3-476e-886d-5a1648322134-kube-api-access-lbq9g\") pod \"neutron-db-create-tdtzl\" (UID: \"ea12e14a-f2f3-476e-886d-5a1648322134\") " pod="openstack/neutron-db-create-tdtzl" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.989800 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74c4c4b-c291-42e8-a7f5-37fc513af9b0-operator-scripts\") pod \"cinder-db-create-kg5gb\" (UID: \"c74c4c4b-c291-42e8-a7f5-37fc513af9b0\") " pod="openstack/cinder-db-create-kg5gb" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.989876 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msvbl\" (UniqueName: \"kubernetes.io/projected/0a9cf3f7-758e-4c7a-9ace-de87acde4680-kube-api-access-msvbl\") pod \"cinder-4269-account-create-rbxk7\" (UID: \"0a9cf3f7-758e-4c7a-9ace-de87acde4680\") " pod="openstack/cinder-4269-account-create-rbxk7" Nov 24 21:41:26 crc kubenswrapper[4915]: I1124 21:41:26.990976 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a9cf3f7-758e-4c7a-9ace-de87acde4680-operator-scripts\") pod \"cinder-4269-account-create-rbxk7\" (UID: \"0a9cf3f7-758e-4c7a-9ace-de87acde4680\") " pod="openstack/cinder-4269-account-create-rbxk7" Nov 24 21:41:26 crc 
kubenswrapper[4915]: I1124 21:41:26.992517 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74c4c4b-c291-42e8-a7f5-37fc513af9b0-operator-scripts\") pod \"cinder-db-create-kg5gb\" (UID: \"c74c4c4b-c291-42e8-a7f5-37fc513af9b0\") " pod="openstack/cinder-db-create-kg5gb" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:26.996581 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cd7abca-28b3-4ddc-b209-7f95193ffa11-config-data\") pod \"keystone-db-sync-5fb97\" (UID: \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\") " pod="openstack/keystone-db-sync-5fb97" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.014181 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4df9l\" (UniqueName: \"kubernetes.io/projected/c74c4c4b-c291-42e8-a7f5-37fc513af9b0-kube-api-access-4df9l\") pod \"cinder-db-create-kg5gb\" (UID: \"c74c4c4b-c291-42e8-a7f5-37fc513af9b0\") " pod="openstack/cinder-db-create-kg5gb" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.014850 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msvbl\" (UniqueName: \"kubernetes.io/projected/0a9cf3f7-758e-4c7a-9ace-de87acde4680-kube-api-access-msvbl\") pod \"cinder-4269-account-create-rbxk7\" (UID: \"0a9cf3f7-758e-4c7a-9ace-de87acde4680\") " pod="openstack/cinder-4269-account-create-rbxk7" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.015007 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd7abca-28b3-4ddc-b209-7f95193ffa11-combined-ca-bundle\") pod \"keystone-db-sync-5fb97\" (UID: \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\") " pod="openstack/keystone-db-sync-5fb97" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.023091 4915 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-bzzzk\" (UniqueName: \"kubernetes.io/projected/2cd7abca-28b3-4ddc-b209-7f95193ffa11-kube-api-access-bzzzk\") pod \"keystone-db-sync-5fb97\" (UID: \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\") " pod="openstack/keystone-db-sync-5fb97" Nov 24 21:41:27 crc kubenswrapper[4915]: W1124 21:41:27.068660 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4e32b86_3497_4bf8_84d7_594ad6597982.slice/crio-2fa83ef236ecdff6e03aca5ebb12122ab08a91383df5630211cb72b91d105767 WatchSource:0}: Error finding container 2fa83ef236ecdff6e03aca5ebb12122ab08a91383df5630211cb72b91d105767: Status 404 returned error can't find the container with id 2fa83ef236ecdff6e03aca5ebb12122ab08a91383df5630211cb72b91d105767 Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.068920 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5e4c-account-create-dnwgt"] Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.072541 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5e4c-account-create-dnwgt" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.076589 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.091331 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7spwq\" (UniqueName: \"kubernetes.io/projected/00e73e89-3b03-49a8-878b-3b208952476b-kube-api-access-7spwq\") pod \"heat-e10f-account-create-5mrxz\" (UID: \"00e73e89-3b03-49a8-878b-3b208952476b\") " pod="openstack/heat-e10f-account-create-5mrxz" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.091408 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea12e14a-f2f3-476e-886d-5a1648322134-operator-scripts\") pod \"neutron-db-create-tdtzl\" (UID: \"ea12e14a-f2f3-476e-886d-5a1648322134\") " pod="openstack/neutron-db-create-tdtzl" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.091430 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00e73e89-3b03-49a8-878b-3b208952476b-operator-scripts\") pod \"heat-e10f-account-create-5mrxz\" (UID: \"00e73e89-3b03-49a8-878b-3b208952476b\") " pod="openstack/heat-e10f-account-create-5mrxz" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.091502 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbq9g\" (UniqueName: \"kubernetes.io/projected/ea12e14a-f2f3-476e-886d-5a1648322134-kube-api-access-lbq9g\") pod \"neutron-db-create-tdtzl\" (UID: \"ea12e14a-f2f3-476e-886d-5a1648322134\") " pod="openstack/neutron-db-create-tdtzl" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.092502 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ea12e14a-f2f3-476e-886d-5a1648322134-operator-scripts\") pod \"neutron-db-create-tdtzl\" (UID: \"ea12e14a-f2f3-476e-886d-5a1648322134\") " pod="openstack/neutron-db-create-tdtzl" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.103298 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00e73e89-3b03-49a8-878b-3b208952476b-operator-scripts\") pod \"heat-e10f-account-create-5mrxz\" (UID: \"00e73e89-3b03-49a8-878b-3b208952476b\") " pod="openstack/heat-e10f-account-create-5mrxz" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.112284 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbq9g\" (UniqueName: \"kubernetes.io/projected/ea12e14a-f2f3-476e-886d-5a1648322134-kube-api-access-lbq9g\") pod \"neutron-db-create-tdtzl\" (UID: \"ea12e14a-f2f3-476e-886d-5a1648322134\") " pod="openstack/neutron-db-create-tdtzl" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.113015 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7spwq\" (UniqueName: \"kubernetes.io/projected/00e73e89-3b03-49a8-878b-3b208952476b-kube-api-access-7spwq\") pod \"heat-e10f-account-create-5mrxz\" (UID: \"00e73e89-3b03-49a8-878b-3b208952476b\") " pod="openstack/heat-e10f-account-create-5mrxz" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.120223 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5e4c-account-create-dnwgt"] Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.137315 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-96hc8"] Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.163005 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-kg5gb" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.177266 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4269-account-create-rbxk7" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.192982 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82lqr\" (UniqueName: \"kubernetes.io/projected/70f4148e-7141-4cb7-9ed3-e0369c008c28-kube-api-access-82lqr\") pod \"neutron-5e4c-account-create-dnwgt\" (UID: \"70f4148e-7141-4cb7-9ed3-e0369c008c28\") " pod="openstack/neutron-5e4c-account-create-dnwgt" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.193202 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70f4148e-7141-4cb7-9ed3-e0369c008c28-operator-scripts\") pod \"neutron-5e4c-account-create-dnwgt\" (UID: \"70f4148e-7141-4cb7-9ed3-e0369c008c28\") " pod="openstack/neutron-5e4c-account-create-dnwgt" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.222415 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5fb97" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.242290 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tdtzl" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.267556 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-e10f-account-create-5mrxz" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.295489 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70f4148e-7141-4cb7-9ed3-e0369c008c28-operator-scripts\") pod \"neutron-5e4c-account-create-dnwgt\" (UID: \"70f4148e-7141-4cb7-9ed3-e0369c008c28\") " pod="openstack/neutron-5e4c-account-create-dnwgt" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.295599 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82lqr\" (UniqueName: \"kubernetes.io/projected/70f4148e-7141-4cb7-9ed3-e0369c008c28-kube-api-access-82lqr\") pod \"neutron-5e4c-account-create-dnwgt\" (UID: \"70f4148e-7141-4cb7-9ed3-e0369c008c28\") " pod="openstack/neutron-5e4c-account-create-dnwgt" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.296985 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70f4148e-7141-4cb7-9ed3-e0369c008c28-operator-scripts\") pod \"neutron-5e4c-account-create-dnwgt\" (UID: \"70f4148e-7141-4cb7-9ed3-e0369c008c28\") " pod="openstack/neutron-5e4c-account-create-dnwgt" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.318735 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82lqr\" (UniqueName: \"kubernetes.io/projected/70f4148e-7141-4cb7-9ed3-e0369c008c28-kube-api-access-82lqr\") pod \"neutron-5e4c-account-create-dnwgt\" (UID: \"70f4148e-7141-4cb7-9ed3-e0369c008c28\") " pod="openstack/neutron-5e4c-account-create-dnwgt" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.428397 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-s5npb"] Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.428815 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5e4c-account-create-dnwgt" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.688918 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-da23-account-create-xqsrk"] Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.736726 4915 generic.go:334] "Generic (PLEG): container finished" podID="f4e32b86-3497-4bf8-84d7-594ad6597982" containerID="d81c9e77a320b75356796ad1a116cb4ead474d898825d2c356a26d7b92dee185" exitCode=0 Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.736843 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8" event={"ID":"f4e32b86-3497-4bf8-84d7-594ad6597982","Type":"ContainerDied","Data":"d81c9e77a320b75356796ad1a116cb4ead474d898825d2c356a26d7b92dee185"} Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.736891 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8" event={"ID":"f4e32b86-3497-4bf8-84d7-594ad6597982","Type":"ContainerStarted","Data":"2fa83ef236ecdff6e03aca5ebb12122ab08a91383df5630211cb72b91d105767"} Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.738602 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-s5npb" event={"ID":"919f233f-0ca4-4b04-ad6f-4b4bebba4f3b","Type":"ContainerStarted","Data":"79b3448254101653401e824b7557d8ae6adc7b9e82fdd59ef9b93cb5f5bf96b5"} Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.738627 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-s5npb" event={"ID":"919f233f-0ca4-4b04-ad6f-4b4bebba4f3b","Type":"ContainerStarted","Data":"04c7108a95b3b9a05f907d8f796690488520af2a54fb296906e60e401e8bb341"} Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.746465 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74dc88fc-sj7nd" podUID="cdd8b057-0c9e-4bc6-8b95-55beb17a7563" 
containerName="dnsmasq-dns" containerID="cri-o://ce1afc60f87ebad07cb6720d86a00f388082ee5f6d05848f61eecb66be3a1183" gracePeriod=10 Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.746752 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-da23-account-create-xqsrk" event={"ID":"4699e1f2-8d9a-4aff-bf8a-d648162c8bf3","Type":"ContainerStarted","Data":"294f5de1ef3657f2a08bff89558c966221aface9493e95f8ef17a4ec75a579f6"} Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.806686 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-s5npb" podStartSLOduration=1.806670334 podStartE2EDuration="1.806670334s" podCreationTimestamp="2025-11-24 21:41:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:27.801069893 +0000 UTC m=+1306.117322076" watchObservedRunningTime="2025-11-24 21:41:27.806670334 +0000 UTC m=+1306.122922507" Nov 24 21:41:27 crc kubenswrapper[4915]: I1124 21:41:27.861395 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-ssnfp"] Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.055484 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-4269-account-create-rbxk7"] Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.073047 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-e10f-account-create-5mrxz"] Nov 24 21:41:28 crc kubenswrapper[4915]: W1124 21:41:28.106230 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00e73e89_3b03_49a8_878b_3b208952476b.slice/crio-6c2f52e9479ca1837000ec91c6f6509d03cbf002552d22555011af1205ba7497 WatchSource:0}: Error finding container 6c2f52e9479ca1837000ec91c6f6509d03cbf002552d22555011af1205ba7497: Status 404 returned error can't find the container with id 
6c2f52e9479ca1837000ec91c6f6509d03cbf002552d22555011af1205ba7497 Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.245467 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kg5gb"] Nov 24 21:41:28 crc kubenswrapper[4915]: W1124 21:41:28.250334 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc74c4c4b_c291_42e8_a7f5_37fc513af9b0.slice/crio-39d07e688456878d71a530caed38e93202bc431adc1b1ca7b682ef8f161d9e48 WatchSource:0}: Error finding container 39d07e688456878d71a530caed38e93202bc431adc1b1ca7b682ef8f161d9e48: Status 404 returned error can't find the container with id 39d07e688456878d71a530caed38e93202bc431adc1b1ca7b682ef8f161d9e48 Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.277949 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-tdtzl"] Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.299752 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5fb97"] Nov 24 21:41:28 crc kubenswrapper[4915]: W1124 21:41:28.374951 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cd7abca_28b3_4ddc_b209_7f95193ffa11.slice/crio-b24bd1aab30021e727f66829ab92fa26962a7c9fd81c4189959d79f6ee9afe91 WatchSource:0}: Error finding container b24bd1aab30021e727f66829ab92fa26962a7c9fd81c4189959d79f6ee9afe91: Status 404 returned error can't find the container with id b24bd1aab30021e727f66829ab92fa26962a7c9fd81c4189959d79f6ee9afe91 Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.469242 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5e4c-account-create-dnwgt"] Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.758187 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tdtzl" 
event={"ID":"ea12e14a-f2f3-476e-886d-5a1648322134","Type":"ContainerStarted","Data":"dea2546913e46a885c3bfaca83f0ee8252719721f51bf623b0d95b2d3afdf8f0"} Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.759418 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5fb97" event={"ID":"2cd7abca-28b3-4ddc-b209-7f95193ffa11","Type":"ContainerStarted","Data":"b24bd1aab30021e727f66829ab92fa26962a7c9fd81c4189959d79f6ee9afe91"} Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.763421 4915 generic.go:334] "Generic (PLEG): container finished" podID="919f233f-0ca4-4b04-ad6f-4b4bebba4f3b" containerID="79b3448254101653401e824b7557d8ae6adc7b9e82fdd59ef9b93cb5f5bf96b5" exitCode=0 Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.763498 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-s5npb" event={"ID":"919f233f-0ca4-4b04-ad6f-4b4bebba4f3b","Type":"ContainerDied","Data":"79b3448254101653401e824b7557d8ae6adc7b9e82fdd59ef9b93cb5f5bf96b5"} Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.779996 4915 generic.go:334] "Generic (PLEG): container finished" podID="cdd8b057-0c9e-4bc6-8b95-55beb17a7563" containerID="ce1afc60f87ebad07cb6720d86a00f388082ee5f6d05848f61eecb66be3a1183" exitCode=0 Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.780065 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-sj7nd" event={"ID":"cdd8b057-0c9e-4bc6-8b95-55beb17a7563","Type":"ContainerDied","Data":"ce1afc60f87ebad07cb6720d86a00f388082ee5f6d05848f61eecb66be3a1183"} Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.780109 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-sj7nd" event={"ID":"cdd8b057-0c9e-4bc6-8b95-55beb17a7563","Type":"ContainerDied","Data":"4df1edc24b255d7a880c0402bc08bebb02c711605a8f7bf81ba792951cb37388"} Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.780124 4915 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="4df1edc24b255d7a880c0402bc08bebb02c711605a8f7bf81ba792951cb37388" Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.781978 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-e10f-account-create-5mrxz" event={"ID":"00e73e89-3b03-49a8-878b-3b208952476b","Type":"ContainerStarted","Data":"2825052bd516853da21b8403a38e8dcff8a97222ce103a096522461af3aeff31"} Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.782014 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-e10f-account-create-5mrxz" event={"ID":"00e73e89-3b03-49a8-878b-3b208952476b","Type":"ContainerStarted","Data":"6c2f52e9479ca1837000ec91c6f6509d03cbf002552d22555011af1205ba7497"} Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.783512 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5e4c-account-create-dnwgt" event={"ID":"70f4148e-7141-4cb7-9ed3-e0369c008c28","Type":"ContainerStarted","Data":"eb12ade3316d8c9e18c37b978d4620814fbf787c2bd415684720f65e77c34101"} Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.785208 4915 generic.go:334] "Generic (PLEG): container finished" podID="4699e1f2-8d9a-4aff-bf8a-d648162c8bf3" containerID="ac7a3ec8f8851e686611fd946399d4e113eb7aaba9636134cc31e04ac3d1359c" exitCode=0 Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.785250 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-da23-account-create-xqsrk" event={"ID":"4699e1f2-8d9a-4aff-bf8a-d648162c8bf3","Type":"ContainerDied","Data":"ac7a3ec8f8851e686611fd946399d4e113eb7aaba9636134cc31e04ac3d1359c"} Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.787098 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kg5gb" event={"ID":"c74c4c4b-c291-42e8-a7f5-37fc513af9b0","Type":"ContainerStarted","Data":"39d07e688456878d71a530caed38e93202bc431adc1b1ca7b682ef8f161d9e48"} Nov 24 21:41:28 crc kubenswrapper[4915]: 
I1124 21:41:28.788577 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4269-account-create-rbxk7" event={"ID":"0a9cf3f7-758e-4c7a-9ace-de87acde4680","Type":"ContainerStarted","Data":"25ff21b6d6353d520d680971004d9b1c657f2b15b6ca52bafcf776e64747a427"} Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.788605 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4269-account-create-rbxk7" event={"ID":"0a9cf3f7-758e-4c7a-9ace-de87acde4680","Type":"ContainerStarted","Data":"1b05ff9db2888c26724f5e47f78f10fd404dd5fae70f6763ae6dfe0442bd65c4"} Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.790029 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-sj7nd" Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.790052 4915 generic.go:334] "Generic (PLEG): container finished" podID="240b0e2f-e08f-45b7-a006-4a982095780c" containerID="a2b3bde45feac8319bb960ad4202e31c64449add4ca5639a7d333a90aa4f3376" exitCode=0 Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.790117 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ssnfp" event={"ID":"240b0e2f-e08f-45b7-a006-4a982095780c","Type":"ContainerDied","Data":"a2b3bde45feac8319bb960ad4202e31c64449add4ca5639a7d333a90aa4f3376"} Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.790138 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ssnfp" event={"ID":"240b0e2f-e08f-45b7-a006-4a982095780c","Type":"ContainerStarted","Data":"2f63c3c8c61c1f156738fa500587a553204125b2131e7a5c0014c78d35e990a4"} Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.793911 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8" event={"ID":"f4e32b86-3497-4bf8-84d7-594ad6597982","Type":"ContainerStarted","Data":"3e3a1fb8bb9a4614f248a4b139ca3149303c45375fe0e170a91191828e42d958"} Nov 24 21:41:28 crc 
kubenswrapper[4915]: I1124 21:41:28.794116 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8" Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.832681 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-config\") pod \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.832974 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8plg\" (UniqueName: \"kubernetes.io/projected/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-kube-api-access-b8plg\") pod \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.833168 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-ovsdbserver-sb\") pod \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.834295 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-ovsdbserver-nb\") pod \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.834337 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-dns-svc\") pod \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\" (UID: \"cdd8b057-0c9e-4bc6-8b95-55beb17a7563\") " Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.839927 4915 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8" podStartSLOduration=3.839912203 podStartE2EDuration="3.839912203s" podCreationTimestamp="2025-11-24 21:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:28.831699771 +0000 UTC m=+1307.147951944" watchObservedRunningTime="2025-11-24 21:41:28.839912203 +0000 UTC m=+1307.156164376" Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.841065 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-kube-api-access-b8plg" (OuterVolumeSpecName: "kube-api-access-b8plg") pod "cdd8b057-0c9e-4bc6-8b95-55beb17a7563" (UID: "cdd8b057-0c9e-4bc6-8b95-55beb17a7563"). InnerVolumeSpecName "kube-api-access-b8plg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.948821 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8plg\" (UniqueName: \"kubernetes.io/projected/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-kube-api-access-b8plg\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.959516 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-config" (OuterVolumeSpecName: "config") pod "cdd8b057-0c9e-4bc6-8b95-55beb17a7563" (UID: "cdd8b057-0c9e-4bc6-8b95-55beb17a7563"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.969761 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cdd8b057-0c9e-4bc6-8b95-55beb17a7563" (UID: "cdd8b057-0c9e-4bc6-8b95-55beb17a7563"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:28 crc kubenswrapper[4915]: I1124 21:41:28.985431 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cdd8b057-0c9e-4bc6-8b95-55beb17a7563" (UID: "cdd8b057-0c9e-4bc6-8b95-55beb17a7563"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.031530 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cdd8b057-0c9e-4bc6-8b95-55beb17a7563" (UID: "cdd8b057-0c9e-4bc6-8b95-55beb17a7563"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.050467 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.050488 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.050498 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.050507 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd8b057-0c9e-4bc6-8b95-55beb17a7563-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:29 crc kubenswrapper[4915]: E1124 21:41:29.452155 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70f4148e_7141_4cb7_9ed3_e0369c008c28.slice/crio-conmon-21234988054c2e55dbac98bf6d4689db614603f6da9308a1cfdcde1089b2b0ea.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea12e14a_f2f3_476e_886d_5a1648322134.slice/crio-1c5e973dcf06aa760d7b734fcfda712f8bc35303801b00aeaa81d1d07b41733c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70f4148e_7141_4cb7_9ed3_e0369c008c28.slice/crio-21234988054c2e55dbac98bf6d4689db614603f6da9308a1cfdcde1089b2b0ea.scope\": RecentStats: unable to find data in memory 
cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea12e14a_f2f3_476e_886d_5a1648322134.slice/crio-conmon-1c5e973dcf06aa760d7b734fcfda712f8bc35303801b00aeaa81d1d07b41733c.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.808407 4915 generic.go:334] "Generic (PLEG): container finished" podID="0a9cf3f7-758e-4c7a-9ace-de87acde4680" containerID="25ff21b6d6353d520d680971004d9b1c657f2b15b6ca52bafcf776e64747a427" exitCode=0 Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.808513 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4269-account-create-rbxk7" event={"ID":"0a9cf3f7-758e-4c7a-9ace-de87acde4680","Type":"ContainerDied","Data":"25ff21b6d6353d520d680971004d9b1c657f2b15b6ca52bafcf776e64747a427"} Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.814430 4915 generic.go:334] "Generic (PLEG): container finished" podID="70f4148e-7141-4cb7-9ed3-e0369c008c28" containerID="21234988054c2e55dbac98bf6d4689db614603f6da9308a1cfdcde1089b2b0ea" exitCode=0 Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.814516 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5e4c-account-create-dnwgt" event={"ID":"70f4148e-7141-4cb7-9ed3-e0369c008c28","Type":"ContainerDied","Data":"21234988054c2e55dbac98bf6d4689db614603f6da9308a1cfdcde1089b2b0ea"} Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.817441 4915 generic.go:334] "Generic (PLEG): container finished" podID="00e73e89-3b03-49a8-878b-3b208952476b" containerID="2825052bd516853da21b8403a38e8dcff8a97222ce103a096522461af3aeff31" exitCode=0 Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.817489 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-e10f-account-create-5mrxz" event={"ID":"00e73e89-3b03-49a8-878b-3b208952476b","Type":"ContainerDied","Data":"2825052bd516853da21b8403a38e8dcff8a97222ce103a096522461af3aeff31"} Nov 24 21:41:29 crc 
kubenswrapper[4915]: I1124 21:41:29.820402 4915 generic.go:334] "Generic (PLEG): container finished" podID="ea12e14a-f2f3-476e-886d-5a1648322134" containerID="1c5e973dcf06aa760d7b734fcfda712f8bc35303801b00aeaa81d1d07b41733c" exitCode=0 Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.820473 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tdtzl" event={"ID":"ea12e14a-f2f3-476e-886d-5a1648322134","Type":"ContainerDied","Data":"1c5e973dcf06aa760d7b734fcfda712f8bc35303801b00aeaa81d1d07b41733c"} Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.822245 4915 generic.go:334] "Generic (PLEG): container finished" podID="c74c4c4b-c291-42e8-a7f5-37fc513af9b0" containerID="c456e024068960c23b78981c13ee141152f417a3489600c1948b124478943411" exitCode=0 Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.822399 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-sj7nd" Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.822296 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kg5gb" event={"ID":"c74c4c4b-c291-42e8-a7f5-37fc513af9b0","Type":"ContainerDied","Data":"c456e024068960c23b78981c13ee141152f417a3489600c1948b124478943411"} Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.892949 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-sj7nd"] Nov 24 21:41:29 crc kubenswrapper[4915]: I1124 21:41:29.896512 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-sj7nd"] Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.333348 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-4269-account-create-rbxk7" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.386675 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a9cf3f7-758e-4c7a-9ace-de87acde4680-operator-scripts\") pod \"0a9cf3f7-758e-4c7a-9ace-de87acde4680\" (UID: \"0a9cf3f7-758e-4c7a-9ace-de87acde4680\") " Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.386838 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msvbl\" (UniqueName: \"kubernetes.io/projected/0a9cf3f7-758e-4c7a-9ace-de87acde4680-kube-api-access-msvbl\") pod \"0a9cf3f7-758e-4c7a-9ace-de87acde4680\" (UID: \"0a9cf3f7-758e-4c7a-9ace-de87acde4680\") " Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.387508 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a9cf3f7-758e-4c7a-9ace-de87acde4680-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a9cf3f7-758e-4c7a-9ace-de87acde4680" (UID: "0a9cf3f7-758e-4c7a-9ace-de87acde4680"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.395454 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a9cf3f7-758e-4c7a-9ace-de87acde4680-kube-api-access-msvbl" (OuterVolumeSpecName: "kube-api-access-msvbl") pod "0a9cf3f7-758e-4c7a-9ace-de87acde4680" (UID: "0a9cf3f7-758e-4c7a-9ace-de87acde4680"). InnerVolumeSpecName "kube-api-access-msvbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.471238 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd8b057-0c9e-4bc6-8b95-55beb17a7563" path="/var/lib/kubelet/pods/cdd8b057-0c9e-4bc6-8b95-55beb17a7563/volumes" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.489557 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msvbl\" (UniqueName: \"kubernetes.io/projected/0a9cf3f7-758e-4c7a-9ace-de87acde4680-kube-api-access-msvbl\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.489581 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a9cf3f7-758e-4c7a-9ace-de87acde4680-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.649431 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-ssnfp" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.677067 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-s5npb" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.688354 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-da23-account-create-xqsrk" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.689216 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-e10f-account-create-5mrxz" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.701301 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxvwn\" (UniqueName: \"kubernetes.io/projected/240b0e2f-e08f-45b7-a006-4a982095780c-kube-api-access-fxvwn\") pod \"240b0e2f-e08f-45b7-a006-4a982095780c\" (UID: \"240b0e2f-e08f-45b7-a006-4a982095780c\") " Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.701365 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/240b0e2f-e08f-45b7-a006-4a982095780c-operator-scripts\") pod \"240b0e2f-e08f-45b7-a006-4a982095780c\" (UID: \"240b0e2f-e08f-45b7-a006-4a982095780c\") " Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.701832 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/240b0e2f-e08f-45b7-a006-4a982095780c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "240b0e2f-e08f-45b7-a006-4a982095780c" (UID: "240b0e2f-e08f-45b7-a006-4a982095780c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.710281 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/240b0e2f-e08f-45b7-a006-4a982095780c-kube-api-access-fxvwn" (OuterVolumeSpecName: "kube-api-access-fxvwn") pod "240b0e2f-e08f-45b7-a006-4a982095780c" (UID: "240b0e2f-e08f-45b7-a006-4a982095780c"). InnerVolumeSpecName "kube-api-access-fxvwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.803044 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7spwq\" (UniqueName: \"kubernetes.io/projected/00e73e89-3b03-49a8-878b-3b208952476b-kube-api-access-7spwq\") pod \"00e73e89-3b03-49a8-878b-3b208952476b\" (UID: \"00e73e89-3b03-49a8-878b-3b208952476b\") " Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.803160 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmsw8\" (UniqueName: \"kubernetes.io/projected/4699e1f2-8d9a-4aff-bf8a-d648162c8bf3-kube-api-access-fmsw8\") pod \"4699e1f2-8d9a-4aff-bf8a-d648162c8bf3\" (UID: \"4699e1f2-8d9a-4aff-bf8a-d648162c8bf3\") " Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.803219 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwrf5\" (UniqueName: \"kubernetes.io/projected/919f233f-0ca4-4b04-ad6f-4b4bebba4f3b-kube-api-access-bwrf5\") pod \"919f233f-0ca4-4b04-ad6f-4b4bebba4f3b\" (UID: \"919f233f-0ca4-4b04-ad6f-4b4bebba4f3b\") " Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.803248 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/919f233f-0ca4-4b04-ad6f-4b4bebba4f3b-operator-scripts\") pod \"919f233f-0ca4-4b04-ad6f-4b4bebba4f3b\" (UID: \"919f233f-0ca4-4b04-ad6f-4b4bebba4f3b\") " Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.803328 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4699e1f2-8d9a-4aff-bf8a-d648162c8bf3-operator-scripts\") pod \"4699e1f2-8d9a-4aff-bf8a-d648162c8bf3\" (UID: \"4699e1f2-8d9a-4aff-bf8a-d648162c8bf3\") " Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.803686 4915 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/919f233f-0ca4-4b04-ad6f-4b4bebba4f3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "919f233f-0ca4-4b04-ad6f-4b4bebba4f3b" (UID: "919f233f-0ca4-4b04-ad6f-4b4bebba4f3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.803818 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4699e1f2-8d9a-4aff-bf8a-d648162c8bf3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4699e1f2-8d9a-4aff-bf8a-d648162c8bf3" (UID: "4699e1f2-8d9a-4aff-bf8a-d648162c8bf3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.804176 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00e73e89-3b03-49a8-878b-3b208952476b-operator-scripts\") pod \"00e73e89-3b03-49a8-878b-3b208952476b\" (UID: \"00e73e89-3b03-49a8-878b-3b208952476b\") " Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.804711 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00e73e89-3b03-49a8-878b-3b208952476b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00e73e89-3b03-49a8-878b-3b208952476b" (UID: "00e73e89-3b03-49a8-878b-3b208952476b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.804816 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxvwn\" (UniqueName: \"kubernetes.io/projected/240b0e2f-e08f-45b7-a006-4a982095780c-kube-api-access-fxvwn\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.804835 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/919f233f-0ca4-4b04-ad6f-4b4bebba4f3b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.804846 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/240b0e2f-e08f-45b7-a006-4a982095780c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.804855 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4699e1f2-8d9a-4aff-bf8a-d648162c8bf3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.804864 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00e73e89-3b03-49a8-878b-3b208952476b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.806621 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4699e1f2-8d9a-4aff-bf8a-d648162c8bf3-kube-api-access-fmsw8" (OuterVolumeSpecName: "kube-api-access-fmsw8") pod "4699e1f2-8d9a-4aff-bf8a-d648162c8bf3" (UID: "4699e1f2-8d9a-4aff-bf8a-d648162c8bf3"). InnerVolumeSpecName "kube-api-access-fmsw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.808141 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00e73e89-3b03-49a8-878b-3b208952476b-kube-api-access-7spwq" (OuterVolumeSpecName: "kube-api-access-7spwq") pod "00e73e89-3b03-49a8-878b-3b208952476b" (UID: "00e73e89-3b03-49a8-878b-3b208952476b"). InnerVolumeSpecName "kube-api-access-7spwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.808308 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/919f233f-0ca4-4b04-ad6f-4b4bebba4f3b-kube-api-access-bwrf5" (OuterVolumeSpecName: "kube-api-access-bwrf5") pod "919f233f-0ca4-4b04-ad6f-4b4bebba4f3b" (UID: "919f233f-0ca4-4b04-ad6f-4b4bebba4f3b"). InnerVolumeSpecName "kube-api-access-bwrf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.833665 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-da23-account-create-xqsrk" event={"ID":"4699e1f2-8d9a-4aff-bf8a-d648162c8bf3","Type":"ContainerDied","Data":"294f5de1ef3657f2a08bff89558c966221aface9493e95f8ef17a4ec75a579f6"} Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.833703 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="294f5de1ef3657f2a08bff89558c966221aface9493e95f8ef17a4ec75a579f6" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.833758 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-da23-account-create-xqsrk" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.837402 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-e10f-account-create-5mrxz" event={"ID":"00e73e89-3b03-49a8-878b-3b208952476b","Type":"ContainerDied","Data":"6c2f52e9479ca1837000ec91c6f6509d03cbf002552d22555011af1205ba7497"} Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.837438 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c2f52e9479ca1837000ec91c6f6509d03cbf002552d22555011af1205ba7497" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.837507 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-e10f-account-create-5mrxz" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.842895 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-4269-account-create-rbxk7" event={"ID":"0a9cf3f7-758e-4c7a-9ace-de87acde4680","Type":"ContainerDied","Data":"1b05ff9db2888c26724f5e47f78f10fd404dd5fae70f6763ae6dfe0442bd65c4"} Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.842928 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b05ff9db2888c26724f5e47f78f10fd404dd5fae70f6763ae6dfe0442bd65c4" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.842976 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-4269-account-create-rbxk7" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.844964 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-ssnfp" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.844978 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-ssnfp" event={"ID":"240b0e2f-e08f-45b7-a006-4a982095780c","Type":"ContainerDied","Data":"2f63c3c8c61c1f156738fa500587a553204125b2131e7a5c0014c78d35e990a4"} Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.845026 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f63c3c8c61c1f156738fa500587a553204125b2131e7a5c0014c78d35e990a4" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.846569 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-s5npb" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.850705 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-s5npb" event={"ID":"919f233f-0ca4-4b04-ad6f-4b4bebba4f3b","Type":"ContainerDied","Data":"04c7108a95b3b9a05f907d8f796690488520af2a54fb296906e60e401e8bb341"} Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.850733 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04c7108a95b3b9a05f907d8f796690488520af2a54fb296906e60e401e8bb341" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.906208 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7spwq\" (UniqueName: \"kubernetes.io/projected/00e73e89-3b03-49a8-878b-3b208952476b-kube-api-access-7spwq\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.906230 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmsw8\" (UniqueName: \"kubernetes.io/projected/4699e1f2-8d9a-4aff-bf8a-d648162c8bf3-kube-api-access-fmsw8\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:30 crc kubenswrapper[4915]: I1124 21:41:30.906241 4915 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-bwrf5\" (UniqueName: \"kubernetes.io/projected/919f233f-0ca4-4b04-ad6f-4b4bebba4f3b-kube-api-access-bwrf5\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:32 crc kubenswrapper[4915]: I1124 21:41:32.868824 4915 generic.go:334] "Generic (PLEG): container finished" podID="cbc69ba4-d747-467c-98ab-d22491a8203c" containerID="2d8e1ed9ea039c46b48f8f3b2a44e99fd7a3806d60979edfea9d14cc94428fd9" exitCode=0 Nov 24 21:41:32 crc kubenswrapper[4915]: I1124 21:41:32.868928 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cbc69ba4-d747-467c-98ab-d22491a8203c","Type":"ContainerDied","Data":"2d8e1ed9ea039c46b48f8f3b2a44e99fd7a3806d60979edfea9d14cc94428fd9"} Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.667651 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kg5gb" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.767287 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-tdtzl" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.777221 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4df9l\" (UniqueName: \"kubernetes.io/projected/c74c4c4b-c291-42e8-a7f5-37fc513af9b0-kube-api-access-4df9l\") pod \"c74c4c4b-c291-42e8-a7f5-37fc513af9b0\" (UID: \"c74c4c4b-c291-42e8-a7f5-37fc513af9b0\") " Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.777469 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74c4c4b-c291-42e8-a7f5-37fc513af9b0-operator-scripts\") pod \"c74c4c4b-c291-42e8-a7f5-37fc513af9b0\" (UID: \"c74c4c4b-c291-42e8-a7f5-37fc513af9b0\") " Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.778104 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c74c4c4b-c291-42e8-a7f5-37fc513af9b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c74c4c4b-c291-42e8-a7f5-37fc513af9b0" (UID: "c74c4c4b-c291-42e8-a7f5-37fc513af9b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.779995 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5e4c-account-create-dnwgt" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.787780 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c74c4c4b-c291-42e8-a7f5-37fc513af9b0-kube-api-access-4df9l" (OuterVolumeSpecName: "kube-api-access-4df9l") pod "c74c4c4b-c291-42e8-a7f5-37fc513af9b0" (UID: "c74c4c4b-c291-42e8-a7f5-37fc513af9b0"). InnerVolumeSpecName "kube-api-access-4df9l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.878650 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea12e14a-f2f3-476e-886d-5a1648322134-operator-scripts\") pod \"ea12e14a-f2f3-476e-886d-5a1648322134\" (UID: \"ea12e14a-f2f3-476e-886d-5a1648322134\") " Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.878683 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70f4148e-7141-4cb7-9ed3-e0369c008c28-operator-scripts\") pod \"70f4148e-7141-4cb7-9ed3-e0369c008c28\" (UID: \"70f4148e-7141-4cb7-9ed3-e0369c008c28\") " Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.878778 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbq9g\" (UniqueName: \"kubernetes.io/projected/ea12e14a-f2f3-476e-886d-5a1648322134-kube-api-access-lbq9g\") pod \"ea12e14a-f2f3-476e-886d-5a1648322134\" (UID: \"ea12e14a-f2f3-476e-886d-5a1648322134\") " Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.878899 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82lqr\" (UniqueName: \"kubernetes.io/projected/70f4148e-7141-4cb7-9ed3-e0369c008c28-kube-api-access-82lqr\") pod \"70f4148e-7141-4cb7-9ed3-e0369c008c28\" (UID: \"70f4148e-7141-4cb7-9ed3-e0369c008c28\") " Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.879090 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea12e14a-f2f3-476e-886d-5a1648322134-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea12e14a-f2f3-476e-886d-5a1648322134" (UID: "ea12e14a-f2f3-476e-886d-5a1648322134"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.879394 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4df9l\" (UniqueName: \"kubernetes.io/projected/c74c4c4b-c291-42e8-a7f5-37fc513af9b0-kube-api-access-4df9l\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.879406 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea12e14a-f2f3-476e-886d-5a1648322134-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.879415 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c74c4c4b-c291-42e8-a7f5-37fc513af9b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.879780 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70f4148e-7141-4cb7-9ed3-e0369c008c28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70f4148e-7141-4cb7-9ed3-e0369c008c28" (UID: "70f4148e-7141-4cb7-9ed3-e0369c008c28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.880461 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5e4c-account-create-dnwgt" event={"ID":"70f4148e-7141-4cb7-9ed3-e0369c008c28","Type":"ContainerDied","Data":"eb12ade3316d8c9e18c37b978d4620814fbf787c2bd415684720f65e77c34101"} Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.880578 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb12ade3316d8c9e18c37b978d4620814fbf787c2bd415684720f65e77c34101" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.880710 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5e4c-account-create-dnwgt" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.882261 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea12e14a-f2f3-476e-886d-5a1648322134-kube-api-access-lbq9g" (OuterVolumeSpecName: "kube-api-access-lbq9g") pod "ea12e14a-f2f3-476e-886d-5a1648322134" (UID: "ea12e14a-f2f3-476e-886d-5a1648322134"). InnerVolumeSpecName "kube-api-access-lbq9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.883734 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-tdtzl" event={"ID":"ea12e14a-f2f3-476e-886d-5a1648322134","Type":"ContainerDied","Data":"dea2546913e46a885c3bfaca83f0ee8252719721f51bf623b0d95b2d3afdf8f0"} Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.883754 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-tdtzl" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.883767 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dea2546913e46a885c3bfaca83f0ee8252719721f51bf623b0d95b2d3afdf8f0" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.885342 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5fb97" event={"ID":"2cd7abca-28b3-4ddc-b209-7f95193ffa11","Type":"ContainerStarted","Data":"703956656f1d9507f18cecb6c6ca43fb84a8e621e26c2466956b78e2596c9361"} Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.887695 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f4148e-7141-4cb7-9ed3-e0369c008c28-kube-api-access-82lqr" (OuterVolumeSpecName: "kube-api-access-82lqr") pod "70f4148e-7141-4cb7-9ed3-e0369c008c28" (UID: "70f4148e-7141-4cb7-9ed3-e0369c008c28"). InnerVolumeSpecName "kube-api-access-82lqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.890326 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kg5gb" event={"ID":"c74c4c4b-c291-42e8-a7f5-37fc513af9b0","Type":"ContainerDied","Data":"39d07e688456878d71a530caed38e93202bc431adc1b1ca7b682ef8f161d9e48"} Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.890368 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39d07e688456878d71a530caed38e93202bc431adc1b1ca7b682ef8f161d9e48" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.890370 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kg5gb" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.892477 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cbc69ba4-d747-467c-98ab-d22491a8203c","Type":"ContainerStarted","Data":"9ba827f603691355284e3e33300c15e463274784ebf5f3e53d83bd9f7e49391d"} Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.981518 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82lqr\" (UniqueName: \"kubernetes.io/projected/70f4148e-7141-4cb7-9ed3-e0369c008c28-kube-api-access-82lqr\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.981553 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70f4148e-7141-4cb7-9ed3-e0369c008c28-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:33 crc kubenswrapper[4915]: I1124 21:41:33.981566 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbq9g\" (UniqueName: \"kubernetes.io/projected/ea12e14a-f2f3-476e-886d-5a1648322134-kube-api-access-lbq9g\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:34 crc kubenswrapper[4915]: I1124 21:41:34.712190 4915 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-5fb97" podStartSLOduration=3.5794458220000003 podStartE2EDuration="8.712141894s" podCreationTimestamp="2025-11-24 21:41:26 +0000 UTC" firstStartedPulling="2025-11-24 21:41:28.387062772 +0000 UTC m=+1306.703314945" lastFinishedPulling="2025-11-24 21:41:33.519758844 +0000 UTC m=+1311.836011017" observedRunningTime="2025-11-24 21:41:33.905233548 +0000 UTC m=+1312.221485721" watchObservedRunningTime="2025-11-24 21:41:34.712141894 +0000 UTC m=+1313.028394087" Nov 24 21:41:36 crc kubenswrapper[4915]: I1124 21:41:36.375879 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8" Nov 24 21:41:36 crc kubenswrapper[4915]: I1124 21:41:36.470522 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-rgp58"] Nov 24 21:41:36 crc kubenswrapper[4915]: I1124 21:41:36.470796 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" podUID="9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" containerName="dnsmasq-dns" containerID="cri-o://fe90e0d9d4005b95e72cc51282a80126cf2845bfec3d239fe19a87421eec3a4e" gracePeriod=10 Nov 24 21:41:36 crc kubenswrapper[4915]: I1124 21:41:36.688500 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" podUID="9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.154:5353: connect: connection refused" Nov 24 21:41:36 crc kubenswrapper[4915]: I1124 21:41:36.931961 4915 generic.go:334] "Generic (PLEG): container finished" podID="9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" containerID="fe90e0d9d4005b95e72cc51282a80126cf2845bfec3d239fe19a87421eec3a4e" exitCode=0 Nov 24 21:41:36 crc kubenswrapper[4915]: I1124 21:41:36.932002 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" event={"ID":"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4","Type":"ContainerDied","Data":"fe90e0d9d4005b95e72cc51282a80126cf2845bfec3d239fe19a87421eec3a4e"} Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.133392 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.264490 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7tzw\" (UniqueName: \"kubernetes.io/projected/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-kube-api-access-t7tzw\") pod \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.264571 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-config\") pod \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.264703 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-ovsdbserver-sb\") pod \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.264730 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-dns-svc\") pod \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.264780 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-ovsdbserver-nb\") pod \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\" (UID: \"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4\") " Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.270437 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-kube-api-access-t7tzw" (OuterVolumeSpecName: "kube-api-access-t7tzw") pod "9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" (UID: "9d81fc37-e6f7-45ac-b468-cb1868ed5bc4"). InnerVolumeSpecName "kube-api-access-t7tzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.326870 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" (UID: "9d81fc37-e6f7-45ac-b468-cb1868ed5bc4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.336763 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-config" (OuterVolumeSpecName: "config") pod "9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" (UID: "9d81fc37-e6f7-45ac-b468-cb1868ed5bc4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.340147 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" (UID: "9d81fc37-e6f7-45ac-b468-cb1868ed5bc4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.361355 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" (UID: "9d81fc37-e6f7-45ac-b468-cb1868ed5bc4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.368026 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7tzw\" (UniqueName: \"kubernetes.io/projected/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-kube-api-access-t7tzw\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.368215 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.368313 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.368380 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.368452 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.960406 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"cbc69ba4-d747-467c-98ab-d22491a8203c","Type":"ContainerStarted","Data":"c45920c0bbdd6981929d1b14f9fddd5512f6d4814e26a9584533551e9efdba03"} Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.960655 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cbc69ba4-d747-467c-98ab-d22491a8203c","Type":"ContainerStarted","Data":"9bcd4a49ab53071c7b9492fd1f5b6c47873211384af786fe78918fbbefc7579f"} Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.963664 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" event={"ID":"9d81fc37-e6f7-45ac-b468-cb1868ed5bc4","Type":"ContainerDied","Data":"18206cc990716ea46d0639005f4a12b02a156934b7c2e527e447c1c548e00a15"} Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.963710 4915 scope.go:117] "RemoveContainer" containerID="fe90e0d9d4005b95e72cc51282a80126cf2845bfec3d239fe19a87421eec3a4e" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.963714 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-rgp58" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.973008 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.973048 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.979523 4915 generic.go:334] "Generic (PLEG): container finished" podID="2cd7abca-28b3-4ddc-b209-7f95193ffa11" containerID="703956656f1d9507f18cecb6c6ca43fb84a8e621e26c2466956b78e2596c9361" exitCode=0 Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.979614 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5fb97" event={"ID":"2cd7abca-28b3-4ddc-b209-7f95193ffa11","Type":"ContainerDied","Data":"703956656f1d9507f18cecb6c6ca43fb84a8e621e26c2466956b78e2596c9361"} Nov 24 21:41:37 crc kubenswrapper[4915]: I1124 21:41:37.999148 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=15.99912809 podStartE2EDuration="15.99912809s" podCreationTimestamp="2025-11-24 21:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:37.982262085 +0000 UTC m=+1316.298514258" watchObservedRunningTime="2025-11-24 21:41:37.99912809 +0000 UTC m=+1316.315380263" Nov 24 21:41:38 crc kubenswrapper[4915]: I1124 21:41:38.003081 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:38 crc kubenswrapper[4915]: I1124 21:41:38.026351 4915 scope.go:117] "RemoveContainer" containerID="972ed35f4ad1137b9f6bb8bdd959a849033053ce8567e5808558e4bed6722c94" Nov 24 21:41:38 crc kubenswrapper[4915]: I1124 21:41:38.031669 4915 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-rgp58"] Nov 24 21:41:38 crc kubenswrapper[4915]: I1124 21:41:38.041570 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-rgp58"] Nov 24 21:41:38 crc kubenswrapper[4915]: I1124 21:41:38.445951 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" path="/var/lib/kubelet/pods/9d81fc37-e6f7-45ac-b468-cb1868ed5bc4/volumes" Nov 24 21:41:39 crc kubenswrapper[4915]: I1124 21:41:39.000284 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 24 21:41:39 crc kubenswrapper[4915]: I1124 21:41:39.458571 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5fb97" Nov 24 21:41:39 crc kubenswrapper[4915]: I1124 21:41:39.510825 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd7abca-28b3-4ddc-b209-7f95193ffa11-combined-ca-bundle\") pod \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\" (UID: \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\") " Nov 24 21:41:39 crc kubenswrapper[4915]: I1124 21:41:39.511099 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cd7abca-28b3-4ddc-b209-7f95193ffa11-config-data\") pod \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\" (UID: \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\") " Nov 24 21:41:39 crc kubenswrapper[4915]: I1124 21:41:39.511235 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzzzk\" (UniqueName: \"kubernetes.io/projected/2cd7abca-28b3-4ddc-b209-7f95193ffa11-kube-api-access-bzzzk\") pod \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\" (UID: \"2cd7abca-28b3-4ddc-b209-7f95193ffa11\") " Nov 24 21:41:39 crc kubenswrapper[4915]: I1124 
21:41:39.517105 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cd7abca-28b3-4ddc-b209-7f95193ffa11-kube-api-access-bzzzk" (OuterVolumeSpecName: "kube-api-access-bzzzk") pod "2cd7abca-28b3-4ddc-b209-7f95193ffa11" (UID: "2cd7abca-28b3-4ddc-b209-7f95193ffa11"). InnerVolumeSpecName "kube-api-access-bzzzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:39 crc kubenswrapper[4915]: I1124 21:41:39.543302 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd7abca-28b3-4ddc-b209-7f95193ffa11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cd7abca-28b3-4ddc-b209-7f95193ffa11" (UID: "2cd7abca-28b3-4ddc-b209-7f95193ffa11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:39 crc kubenswrapper[4915]: I1124 21:41:39.570719 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd7abca-28b3-4ddc-b209-7f95193ffa11-config-data" (OuterVolumeSpecName: "config-data") pod "2cd7abca-28b3-4ddc-b209-7f95193ffa11" (UID: "2cd7abca-28b3-4ddc-b209-7f95193ffa11"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:39 crc kubenswrapper[4915]: I1124 21:41:39.614381 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd7abca-28b3-4ddc-b209-7f95193ffa11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:39 crc kubenswrapper[4915]: I1124 21:41:39.614417 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cd7abca-28b3-4ddc-b209-7f95193ffa11-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:39 crc kubenswrapper[4915]: I1124 21:41:39.614428 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzzzk\" (UniqueName: \"kubernetes.io/projected/2cd7abca-28b3-4ddc-b209-7f95193ffa11-kube-api-access-bzzzk\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.002722 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5fb97" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.002728 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5fb97" event={"ID":"2cd7abca-28b3-4ddc-b209-7f95193ffa11","Type":"ContainerDied","Data":"b24bd1aab30021e727f66829ab92fa26962a7c9fd81c4189959d79f6ee9afe91"} Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.002796 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b24bd1aab30021e727f66829ab92fa26962a7c9fd81c4189959d79f6ee9afe91" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.263193 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-b5rpb"] Nov 24 21:41:40 crc kubenswrapper[4915]: E1124 21:41:40.263663 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="919f233f-0ca4-4b04-ad6f-4b4bebba4f3b" containerName="mariadb-database-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 
21:41:40.263680 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="919f233f-0ca4-4b04-ad6f-4b4bebba4f3b" containerName="mariadb-database-create" Nov 24 21:41:40 crc kubenswrapper[4915]: E1124 21:41:40.263696 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a9cf3f7-758e-4c7a-9ace-de87acde4680" containerName="mariadb-account-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.263701 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a9cf3f7-758e-4c7a-9ace-de87acde4680" containerName="mariadb-account-create" Nov 24 21:41:40 crc kubenswrapper[4915]: E1124 21:41:40.263709 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" containerName="dnsmasq-dns" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.263716 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" containerName="dnsmasq-dns" Nov 24 21:41:40 crc kubenswrapper[4915]: E1124 21:41:40.263740 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd8b057-0c9e-4bc6-8b95-55beb17a7563" containerName="dnsmasq-dns" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.263746 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd8b057-0c9e-4bc6-8b95-55beb17a7563" containerName="dnsmasq-dns" Nov 24 21:41:40 crc kubenswrapper[4915]: E1124 21:41:40.263760 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea12e14a-f2f3-476e-886d-5a1648322134" containerName="mariadb-database-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.263765 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea12e14a-f2f3-476e-886d-5a1648322134" containerName="mariadb-database-create" Nov 24 21:41:40 crc kubenswrapper[4915]: E1124 21:41:40.263862 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd8b057-0c9e-4bc6-8b95-55beb17a7563" containerName="init" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 
21:41:40.263868 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd8b057-0c9e-4bc6-8b95-55beb17a7563" containerName="init" Nov 24 21:41:40 crc kubenswrapper[4915]: E1124 21:41:40.263876 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd7abca-28b3-4ddc-b209-7f95193ffa11" containerName="keystone-db-sync" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.263882 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd7abca-28b3-4ddc-b209-7f95193ffa11" containerName="keystone-db-sync" Nov 24 21:41:40 crc kubenswrapper[4915]: E1124 21:41:40.263893 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240b0e2f-e08f-45b7-a006-4a982095780c" containerName="mariadb-database-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.263898 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="240b0e2f-e08f-45b7-a006-4a982095780c" containerName="mariadb-database-create" Nov 24 21:41:40 crc kubenswrapper[4915]: E1124 21:41:40.263911 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00e73e89-3b03-49a8-878b-3b208952476b" containerName="mariadb-account-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.263916 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="00e73e89-3b03-49a8-878b-3b208952476b" containerName="mariadb-account-create" Nov 24 21:41:40 crc kubenswrapper[4915]: E1124 21:41:40.263926 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4699e1f2-8d9a-4aff-bf8a-d648162c8bf3" containerName="mariadb-account-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.263932 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4699e1f2-8d9a-4aff-bf8a-d648162c8bf3" containerName="mariadb-account-create" Nov 24 21:41:40 crc kubenswrapper[4915]: E1124 21:41:40.263944 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70f4148e-7141-4cb7-9ed3-e0369c008c28" containerName="mariadb-account-create" Nov 24 21:41:40 crc 
kubenswrapper[4915]: I1124 21:41:40.263959 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f4148e-7141-4cb7-9ed3-e0369c008c28" containerName="mariadb-account-create" Nov 24 21:41:40 crc kubenswrapper[4915]: E1124 21:41:40.263974 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" containerName="init" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.263979 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" containerName="init" Nov 24 21:41:40 crc kubenswrapper[4915]: E1124 21:41:40.263988 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c74c4c4b-c291-42e8-a7f5-37fc513af9b0" containerName="mariadb-database-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.263994 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c74c4c4b-c291-42e8-a7f5-37fc513af9b0" containerName="mariadb-database-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.264175 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd7abca-28b3-4ddc-b209-7f95193ffa11" containerName="keystone-db-sync" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.264186 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd8b057-0c9e-4bc6-8b95-55beb17a7563" containerName="dnsmasq-dns" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.264198 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="240b0e2f-e08f-45b7-a006-4a982095780c" containerName="mariadb-database-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.264208 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c74c4c4b-c291-42e8-a7f5-37fc513af9b0" containerName="mariadb-database-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.264215 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="70f4148e-7141-4cb7-9ed3-e0369c008c28" 
containerName="mariadb-account-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.264226 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4699e1f2-8d9a-4aff-bf8a-d648162c8bf3" containerName="mariadb-account-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.264234 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="00e73e89-3b03-49a8-878b-3b208952476b" containerName="mariadb-account-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.264243 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a9cf3f7-758e-4c7a-9ace-de87acde4680" containerName="mariadb-account-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.264253 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea12e14a-f2f3-476e-886d-5a1648322134" containerName="mariadb-database-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.264261 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="919f233f-0ca4-4b04-ad6f-4b4bebba4f3b" containerName="mariadb-database-create" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.264271 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d81fc37-e6f7-45ac-b468-cb1868ed5bc4" containerName="dnsmasq-dns" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.265345 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.277467 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-b5rpb"] Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.309835 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-fhxnt"] Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.310998 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.313284 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.313574 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.313732 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x7cdq" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.313862 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.316995 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.326687 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.326732 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddzgw\" (UniqueName: \"kubernetes.io/projected/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-kube-api-access-ddzgw\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.326756 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-ovsdbserver-nb\") pod 
\"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.326789 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.326815 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.326984 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-config\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.407071 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fhxnt"] Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.428651 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-credential-keys\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.428691 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-scripts\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.428735 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfhp7\" (UniqueName: \"kubernetes.io/projected/57c34cab-0ab6-4b4d-9278-f61863a50b22-kube-api-access-hfhp7\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.428800 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-fernet-keys\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.428824 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.428840 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-combined-ca-bundle\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.428859 4915 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-ddzgw\" (UniqueName: \"kubernetes.io/projected/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-kube-api-access-ddzgw\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.428880 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.428899 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.428924 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.429038 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-config\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.429198 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-config-data\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.429745 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.429909 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.429957 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.430407 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-config\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.432232 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: 
\"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.456939 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-6jszw"] Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.461165 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddzgw\" (UniqueName: \"kubernetes.io/projected/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-kube-api-access-ddzgw\") pod \"dnsmasq-dns-bbf5cc879-b5rpb\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.463348 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-6jszw"] Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.463471 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-6jszw" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.489241 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-7vhk4" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.490427 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.533066 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-config-data\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.533192 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-credential-keys\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") 
" pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.533217 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-scripts\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.533253 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdfq9\" (UniqueName: \"kubernetes.io/projected/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-kube-api-access-kdfq9\") pod \"heat-db-sync-6jszw\" (UID: \"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\") " pod="openstack/heat-db-sync-6jszw" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.533283 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfhp7\" (UniqueName: \"kubernetes.io/projected/57c34cab-0ab6-4b4d-9278-f61863a50b22-kube-api-access-hfhp7\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.533340 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-config-data\") pod \"heat-db-sync-6jszw\" (UID: \"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\") " pod="openstack/heat-db-sync-6jszw" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.533382 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-fernet-keys\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: 
I1124 21:41:40.533404 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-combined-ca-bundle\") pod \"heat-db-sync-6jszw\" (UID: \"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\") " pod="openstack/heat-db-sync-6jszw" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.533442 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-combined-ca-bundle\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.542538 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-config-data\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.546066 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-scripts\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.547217 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-credential-keys\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.548787 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-combined-ca-bundle\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.552479 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-fernet-keys\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.561983 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfhp7\" (UniqueName: \"kubernetes.io/projected/57c34cab-0ab6-4b4d-9278-f61863a50b22-kube-api-access-hfhp7\") pod \"keystone-bootstrap-fhxnt\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.583053 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.623311 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-9t9tc"] Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.624642 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.633764 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-29hzm" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.634075 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.634302 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.635232 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdfq9\" (UniqueName: \"kubernetes.io/projected/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-kube-api-access-kdfq9\") pod \"heat-db-sync-6jszw\" (UID: \"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\") " pod="openstack/heat-db-sync-6jszw" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.635309 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-config-data\") pod \"heat-db-sync-6jszw\" (UID: \"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\") " pod="openstack/heat-db-sync-6jszw" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.635338 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-combined-ca-bundle\") pod \"heat-db-sync-6jszw\" (UID: \"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\") " pod="openstack/heat-db-sync-6jszw" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.651294 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.660309 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-config-data\") pod \"heat-db-sync-6jszw\" (UID: \"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\") " pod="openstack/heat-db-sync-6jszw" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.666319 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-combined-ca-bundle\") pod \"heat-db-sync-6jszw\" (UID: \"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\") " pod="openstack/heat-db-sync-6jszw" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.666869 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-kxbs8"] Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.681493 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kxbs8" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.684671 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdfq9\" (UniqueName: \"kubernetes.io/projected/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-kube-api-access-kdfq9\") pod \"heat-db-sync-6jszw\" (UID: \"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\") " pod="openstack/heat-db-sync-6jszw" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.688096 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.688276 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kxbs8"] Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.691743 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5td8d" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.719551 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9t9tc"] Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.737304 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-combined-ca-bundle\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.737357 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8612c3b5-24cc-431a-a888-8be923564356-combined-ca-bundle\") pod \"barbican-db-sync-kxbs8\" (UID: \"8612c3b5-24cc-431a-a888-8be923564356\") " pod="openstack/barbican-db-sync-kxbs8" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.737384 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-db-sync-config-data\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.737437 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0646b5c8-f87a-4f27-9327-1bc87669623f-etc-machine-id\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.737493 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2k2d\" (UniqueName: \"kubernetes.io/projected/8612c3b5-24cc-431a-a888-8be923564356-kube-api-access-p2k2d\") pod \"barbican-db-sync-kxbs8\" (UID: \"8612c3b5-24cc-431a-a888-8be923564356\") " pod="openstack/barbican-db-sync-kxbs8" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.737516 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8612c3b5-24cc-431a-a888-8be923564356-db-sync-config-data\") pod \"barbican-db-sync-kxbs8\" (UID: \"8612c3b5-24cc-431a-a888-8be923564356\") " pod="openstack/barbican-db-sync-kxbs8" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.737535 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-config-data\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.737573 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75jb4\" (UniqueName: \"kubernetes.io/projected/0646b5c8-f87a-4f27-9327-1bc87669623f-kube-api-access-75jb4\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.737593 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-scripts\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.770019 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vx4fc"] Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.771501 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vx4fc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.777796 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7vxkw" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.777957 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.778045 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.838062 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-6jszw" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.839991 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-combined-ca-bundle\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.840030 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8612c3b5-24cc-431a-a888-8be923564356-combined-ca-bundle\") pod \"barbican-db-sync-kxbs8\" (UID: \"8612c3b5-24cc-431a-a888-8be923564356\") " pod="openstack/barbican-db-sync-kxbs8" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.840057 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-db-sync-config-data\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.840093 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0646b5c8-f87a-4f27-9327-1bc87669623f-etc-machine-id\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.840133 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2k2d\" (UniqueName: \"kubernetes.io/projected/8612c3b5-24cc-431a-a888-8be923564356-kube-api-access-p2k2d\") pod \"barbican-db-sync-kxbs8\" (UID: \"8612c3b5-24cc-431a-a888-8be923564356\") " pod="openstack/barbican-db-sync-kxbs8" Nov 24 21:41:40 crc 
kubenswrapper[4915]: I1124 21:41:40.840156 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8612c3b5-24cc-431a-a888-8be923564356-db-sync-config-data\") pod \"barbican-db-sync-kxbs8\" (UID: \"8612c3b5-24cc-431a-a888-8be923564356\") " pod="openstack/barbican-db-sync-kxbs8" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.840181 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpf6p\" (UniqueName: \"kubernetes.io/projected/54e7cfde-938c-4b51-8cfc-a1f290de00fd-kube-api-access-gpf6p\") pod \"neutron-db-sync-vx4fc\" (UID: \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\") " pod="openstack/neutron-db-sync-vx4fc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.840201 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-config-data\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.840216 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75jb4\" (UniqueName: \"kubernetes.io/projected/0646b5c8-f87a-4f27-9327-1bc87669623f-kube-api-access-75jb4\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.840239 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-scripts\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.840255 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/54e7cfde-938c-4b51-8cfc-a1f290de00fd-config\") pod \"neutron-db-sync-vx4fc\" (UID: \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\") " pod="openstack/neutron-db-sync-vx4fc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.840270 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e7cfde-938c-4b51-8cfc-a1f290de00fd-combined-ca-bundle\") pod \"neutron-db-sync-vx4fc\" (UID: \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\") " pod="openstack/neutron-db-sync-vx4fc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.845281 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0646b5c8-f87a-4f27-9327-1bc87669623f-etc-machine-id\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.850410 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-combined-ca-bundle\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.853590 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8612c3b5-24cc-431a-a888-8be923564356-combined-ca-bundle\") pod \"barbican-db-sync-kxbs8\" (UID: \"8612c3b5-24cc-431a-a888-8be923564356\") " pod="openstack/barbican-db-sync-kxbs8" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.854408 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-scripts\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.855691 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-db-sync-config-data\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.859638 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-config-data\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.869016 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8612c3b5-24cc-431a-a888-8be923564356-db-sync-config-data\") pod \"barbican-db-sync-kxbs8\" (UID: \"8612c3b5-24cc-431a-a888-8be923564356\") " pod="openstack/barbican-db-sync-kxbs8" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.879095 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vx4fc"] Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.882121 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75jb4\" (UniqueName: \"kubernetes.io/projected/0646b5c8-f87a-4f27-9327-1bc87669623f-kube-api-access-75jb4\") pod \"cinder-db-sync-9t9tc\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.883464 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2k2d\" 
(UniqueName: \"kubernetes.io/projected/8612c3b5-24cc-431a-a888-8be923564356-kube-api-access-p2k2d\") pod \"barbican-db-sync-kxbs8\" (UID: \"8612c3b5-24cc-431a-a888-8be923564356\") " pod="openstack/barbican-db-sync-kxbs8" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.942374 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpf6p\" (UniqueName: \"kubernetes.io/projected/54e7cfde-938c-4b51-8cfc-a1f290de00fd-kube-api-access-gpf6p\") pod \"neutron-db-sync-vx4fc\" (UID: \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\") " pod="openstack/neutron-db-sync-vx4fc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.942957 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/54e7cfde-938c-4b51-8cfc-a1f290de00fd-config\") pod \"neutron-db-sync-vx4fc\" (UID: \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\") " pod="openstack/neutron-db-sync-vx4fc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.942990 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e7cfde-938c-4b51-8cfc-a1f290de00fd-combined-ca-bundle\") pod \"neutron-db-sync-vx4fc\" (UID: \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\") " pod="openstack/neutron-db-sync-vx4fc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.950081 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-b5rpb"] Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.956529 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/54e7cfde-938c-4b51-8cfc-a1f290de00fd-config\") pod \"neutron-db-sync-vx4fc\" (UID: \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\") " pod="openstack/neutron-db-sync-vx4fc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.964263 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gpf6p\" (UniqueName: \"kubernetes.io/projected/54e7cfde-938c-4b51-8cfc-a1f290de00fd-kube-api-access-gpf6p\") pod \"neutron-db-sync-vx4fc\" (UID: \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\") " pod="openstack/neutron-db-sync-vx4fc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.968033 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e7cfde-938c-4b51-8cfc-a1f290de00fd-combined-ca-bundle\") pod \"neutron-db-sync-vx4fc\" (UID: \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\") " pod="openstack/neutron-db-sync-vx4fc" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.981383 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-88gfs"] Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.984281 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:40 crc kubenswrapper[4915]: I1124 21:41:40.999148 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-88gfs"] Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.025833 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2dxdv"] Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.027282 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.031016 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dlc2b" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.031192 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.031327 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.046138 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.049245 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.051750 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.052688 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.070556 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2dxdv"] Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.097248 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.100511 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.128678 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kxbs8" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.148489 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.148619 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.148646 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578a54f5-2d6f-4c21-b549-55cd00237570-logs\") pod \"placement-db-sync-2dxdv\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.148688 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.148767 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-config\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " 
pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.148831 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.148872 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g8vp\" (UniqueName: \"kubernetes.io/projected/8fd83616-8f63-4754-8286-5c25487c8b9c-kube-api-access-4g8vp\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.148998 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-combined-ca-bundle\") pod \"placement-db-sync-2dxdv\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.149036 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-config-data\") pod \"placement-db-sync-2dxdv\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.149060 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-scripts\") pod \"placement-db-sync-2dxdv\" (UID: 
\"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.149107 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94lhq\" (UniqueName: \"kubernetes.io/projected/578a54f5-2d6f-4c21-b549-55cd00237570-kube-api-access-94lhq\") pod \"placement-db-sync-2dxdv\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.169016 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vx4fc" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.251431 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-config\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.251495 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.251527 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g8vp\" (UniqueName: \"kubernetes.io/projected/8fd83616-8f63-4754-8286-5c25487c8b9c-kube-api-access-4g8vp\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.251560 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.251588 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a56691-c50a-4917-9331-4920a62c5a3b-run-httpd\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.251624 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b7vt\" (UniqueName: \"kubernetes.io/projected/c1a56691-c50a-4917-9331-4920a62c5a3b-kube-api-access-9b7vt\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.251660 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-scripts\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.251685 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-combined-ca-bundle\") pod \"placement-db-sync-2dxdv\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.251703 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.251730 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-config-data\") pod \"placement-db-sync-2dxdv\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.252552 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-scripts\") pod \"placement-db-sync-2dxdv\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.252633 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94lhq\" (UniqueName: \"kubernetes.io/projected/578a54f5-2d6f-4c21-b549-55cd00237570-kube-api-access-94lhq\") pod \"placement-db-sync-2dxdv\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.252666 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.252692 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-config-data\") pod \"ceilometer-0\" (UID: 
\"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.252758 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.252806 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578a54f5-2d6f-4c21-b549-55cd00237570-logs\") pod \"placement-db-sync-2dxdv\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.252858 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.252893 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a56691-c50a-4917-9331-4920a62c5a3b-log-httpd\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.255570 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-config\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.255977 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.256474 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578a54f5-2d6f-4c21-b549-55cd00237570-logs\") pod \"placement-db-sync-2dxdv\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.256918 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.257577 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.257824 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.262941 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-scripts\") pod \"placement-db-sync-2dxdv\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.263379 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-combined-ca-bundle\") pod \"placement-db-sync-2dxdv\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.283074 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-config-data\") pod \"placement-db-sync-2dxdv\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.285320 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94lhq\" (UniqueName: \"kubernetes.io/projected/578a54f5-2d6f-4c21-b549-55cd00237570-kube-api-access-94lhq\") pod \"placement-db-sync-2dxdv\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.289540 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g8vp\" (UniqueName: \"kubernetes.io/projected/8fd83616-8f63-4754-8286-5c25487c8b9c-kube-api-access-4g8vp\") pod \"dnsmasq-dns-56df8fb6b7-88gfs\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.306388 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.358978 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-config-data\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.361692 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a56691-c50a-4917-9331-4920a62c5a3b-log-httpd\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.362677 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a56691-c50a-4917-9331-4920a62c5a3b-log-httpd\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.362896 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.362935 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a56691-c50a-4917-9331-4920a62c5a3b-run-httpd\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.362994 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b7vt\" (UniqueName: 
\"kubernetes.io/projected/c1a56691-c50a-4917-9331-4920a62c5a3b-kube-api-access-9b7vt\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.363045 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-scripts\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.363098 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.363967 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a56691-c50a-4917-9331-4920a62c5a3b-run-httpd\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.367609 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.368984 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-config-data\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.370198 4915 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2dxdv" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.374809 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-scripts\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.379732 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.382735 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b7vt\" (UniqueName: \"kubernetes.io/projected/c1a56691-c50a-4917-9331-4920a62c5a3b-kube-api-access-9b7vt\") pod \"ceilometer-0\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.390924 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.423859 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fhxnt"] Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.446045 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.448055 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.460556 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.465430 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.465706 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.465891 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qmbqs" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.466159 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.569377 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.569431 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.569454 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.569472 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.569487 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-config-data\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.569521 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgzzp\" (UniqueName: \"kubernetes.io/projected/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-kube-api-access-rgzzp\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.569553 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.569598 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-logs\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.627864 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.630107 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.635222 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.635456 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.662318 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.672139 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.672195 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.672218 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-scripts\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.672234 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.672250 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-config-data\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.672278 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgzzp\" (UniqueName: \"kubernetes.io/projected/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-kube-api-access-rgzzp\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.672311 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.672340 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-logs\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.673699 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.674008 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-logs\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.677002 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.680190 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-config-data\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.685526 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.686242 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-scripts\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.727516 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgzzp\" (UniqueName: \"kubernetes.io/projected/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-kube-api-access-rgzzp\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.731048 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.783941 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs4xx\" (UniqueName: \"kubernetes.io/projected/02f62db4-2e1b-48c4-b644-06155ffe44c1-kube-api-access-fs4xx\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.784060 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.784173 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.784233 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.784329 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.784370 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f62db4-2e1b-48c4-b644-06155ffe44c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.784495 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02f62db4-2e1b-48c4-b644-06155ffe44c1-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.784533 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.874241 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-6jszw"] Nov 24 21:41:41 crc kubenswrapper[4915]: W1124 21:41:41.874425 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c21efec_e27b_4e9e_bdc6_a4d9a0eab412.slice/crio-4f64b68fe67c5a007344a93686f60c363f22edfb40921743ce739eb4d278e0be WatchSource:0}: Error finding container 4f64b68fe67c5a007344a93686f60c363f22edfb40921743ce739eb4d278e0be: Status 404 returned error can't find the container with id 4f64b68fe67c5a007344a93686f60c363f22edfb40921743ce739eb4d278e0be Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.887939 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.899871 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f62db4-2e1b-48c4-b644-06155ffe44c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: 
I1124 21:41:41.900077 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02f62db4-2e1b-48c4-b644-06155ffe44c1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.900125 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f62db4-2e1b-48c4-b644-06155ffe44c1-logs\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.900262 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.906546 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs4xx\" (UniqueName: \"kubernetes.io/projected/02f62db4-2e1b-48c4-b644-06155ffe44c1-kube-api-access-fs4xx\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.906612 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.906707 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.906767 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.907720 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.907731 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-config-data\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.908271 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-b5rpb"] Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.908746 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Nov 24 
21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.900619 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02f62db4-2e1b-48c4-b644-06155ffe44c1-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.915335 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-scripts\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.916072 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.917091 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.935316 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs4xx\" (UniqueName: \"kubernetes.io/projected/02f62db4-2e1b-48c4-b644-06155ffe44c1-kube-api-access-fs4xx\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:41 crc kubenswrapper[4915]: I1124 21:41:41.947750 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:42 crc kubenswrapper[4915]: I1124 21:41:42.003833 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9t9tc"] Nov 24 21:41:42 crc kubenswrapper[4915]: I1124 21:41:42.045700 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" event={"ID":"74f70de0-ded1-4353-8cc0-5f85dc3e60c5","Type":"ContainerStarted","Data":"2a8c59e379cd6b597b53e67de2d58849884918430747d4dbe1b25754233396e9"} Nov 24 21:41:42 crc kubenswrapper[4915]: I1124 21:41:42.054179 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fhxnt" event={"ID":"57c34cab-0ab6-4b4d-9278-f61863a50b22","Type":"ContainerStarted","Data":"2d4b2ebfaa93d1ea8a9be306b362e49953c41ae748d27e350f933cebecf8c96e"} Nov 24 21:41:42 crc kubenswrapper[4915]: I1124 21:41:42.055753 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6jszw" event={"ID":"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412","Type":"ContainerStarted","Data":"4f64b68fe67c5a007344a93686f60c363f22edfb40921743ce739eb4d278e0be"} Nov 24 21:41:42 crc kubenswrapper[4915]: I1124 21:41:42.073669 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vx4fc"] Nov 24 21:41:42 crc kubenswrapper[4915]: I1124 21:41:42.142631 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:41:42 crc kubenswrapper[4915]: I1124 21:41:42.162450 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:41:42 crc kubenswrapper[4915]: I1124 21:41:42.239261 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kxbs8"] Nov 24 21:41:42 crc kubenswrapper[4915]: I1124 21:41:42.322640 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2dxdv"] Nov 24 21:41:42 crc kubenswrapper[4915]: I1124 21:41:42.339000 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-88gfs"] Nov 24 21:41:42 crc kubenswrapper[4915]: W1124 21:41:42.560959 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1a56691_c50a_4917_9331_4920a62c5a3b.slice/crio-c4097db2d57cba19ea40fa801ebaed3b5c6bbccbaf481853d584ed1cc7a85af0 WatchSource:0}: Error finding container c4097db2d57cba19ea40fa801ebaed3b5c6bbccbaf481853d584ed1cc7a85af0: Status 404 returned error can't find the container with id c4097db2d57cba19ea40fa801ebaed3b5c6bbccbaf481853d584ed1cc7a85af0 Nov 24 21:41:42 crc kubenswrapper[4915]: I1124 21:41:42.660215 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:41:42 crc kubenswrapper[4915]: I1124 21:41:42.817965 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:41:42 crc kubenswrapper[4915]: I1124 21:41:42.955564 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.015884 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.104810 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.167572 4915 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" event={"ID":"74f70de0-ded1-4353-8cc0-5f85dc3e60c5","Type":"ContainerDied","Data":"efb39e8f599309d1497a84221ccfe2da98670d37d81187f6c8739e7a848c0580"} Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.167873 4915 generic.go:334] "Generic (PLEG): container finished" podID="74f70de0-ded1-4353-8cc0-5f85dc3e60c5" containerID="efb39e8f599309d1497a84221ccfe2da98670d37d81187f6c8739e7a848c0580" exitCode=0 Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.177361 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1a56691-c50a-4917-9331-4920a62c5a3b","Type":"ContainerStarted","Data":"c4097db2d57cba19ea40fa801ebaed3b5c6bbccbaf481853d584ed1cc7a85af0"} Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.183508 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65","Type":"ContainerStarted","Data":"c2a99eca3313c3af75ed096926396acd95c4f7c5d766eb3443439501c0511998"} Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.187006 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2dxdv" event={"ID":"578a54f5-2d6f-4c21-b549-55cd00237570","Type":"ContainerStarted","Data":"8afc805208fa72676a2ed26e58ca2ae2ec2d00b48d81a6b3012984bec3df0488"} Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.204278 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vx4fc" event={"ID":"54e7cfde-938c-4b51-8cfc-a1f290de00fd","Type":"ContainerStarted","Data":"f9340f261202a38f689b7c9fce867b689e2b68762d7b650e05089de27c813aad"} Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.204339 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vx4fc" 
event={"ID":"54e7cfde-938c-4b51-8cfc-a1f290de00fd","Type":"ContainerStarted","Data":"ed2a5e651acba51109dd2037fec06d1041650476842dbaa166938e77e1b1f17d"} Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.209210 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.214374 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"02f62db4-2e1b-48c4-b644-06155ffe44c1","Type":"ContainerStarted","Data":"8f57e1f3e7ad876b9c942ab7371bf8caefb6b0908ac2d43f8aa33065d6fd9938"} Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.214410 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fhxnt" event={"ID":"57c34cab-0ab6-4b4d-9278-f61863a50b22","Type":"ContainerStarted","Data":"bfd0790b766d393930046cfb400c998ce4a2cfac5f9d11f1e812e3ecd4dc862b"} Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.237360 4915 generic.go:334] "Generic (PLEG): container finished" podID="8fd83616-8f63-4754-8286-5c25487c8b9c" containerID="daf0c36e8f6c6d0b6ffb066a05d60ae761d9e2d8af9fa4e59e0debde414cc076" exitCode=0 Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.237748 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" event={"ID":"8fd83616-8f63-4754-8286-5c25487c8b9c","Type":"ContainerDied","Data":"daf0c36e8f6c6d0b6ffb066a05d60ae761d9e2d8af9fa4e59e0debde414cc076"} Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.237809 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" event={"ID":"8fd83616-8f63-4754-8286-5c25487c8b9c","Type":"ContainerStarted","Data":"99db960ad300d200d79810da8193f405b3a3927f1a89dc56ced41c583b1c6b59"} Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.238645 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vx4fc" 
podStartSLOduration=3.238633193 podStartE2EDuration="3.238633193s" podCreationTimestamp="2025-11-24 21:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:43.220561505 +0000 UTC m=+1321.536813678" watchObservedRunningTime="2025-11-24 21:41:43.238633193 +0000 UTC m=+1321.554885366" Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.255288 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kxbs8" event={"ID":"8612c3b5-24cc-431a-a888-8be923564356","Type":"ContainerStarted","Data":"ae168591854fb450794eedc81a09117ffeb67a5c21bb5bd99b1f92cc45097998"} Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.268712 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9t9tc" event={"ID":"0646b5c8-f87a-4f27-9327-1bc87669623f","Type":"ContainerStarted","Data":"b8ec9ffc362f4e834991d257afb0ab3c25e8dc8469bbd7d228d0c8f67e91daee"} Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.280152 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-fhxnt" podStartSLOduration=3.273766899 podStartE2EDuration="3.273766899s" podCreationTimestamp="2025-11-24 21:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:43.248650022 +0000 UTC m=+1321.564902195" watchObservedRunningTime="2025-11-24 21:41:43.273766899 +0000 UTC m=+1321.590019072" Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.762854 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.821603 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-dns-svc\") pod \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.821682 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-ovsdbserver-sb\") pod \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.821947 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-dns-swift-storage-0\") pod \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.822185 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddzgw\" (UniqueName: \"kubernetes.io/projected/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-kube-api-access-ddzgw\") pod \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.822213 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-config\") pod \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.822304 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-ovsdbserver-nb\") pod \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\" (UID: \"74f70de0-ded1-4353-8cc0-5f85dc3e60c5\") " Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.831248 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-kube-api-access-ddzgw" (OuterVolumeSpecName: "kube-api-access-ddzgw") pod "74f70de0-ded1-4353-8cc0-5f85dc3e60c5" (UID: "74f70de0-ded1-4353-8cc0-5f85dc3e60c5"). InnerVolumeSpecName "kube-api-access-ddzgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.867311 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "74f70de0-ded1-4353-8cc0-5f85dc3e60c5" (UID: "74f70de0-ded1-4353-8cc0-5f85dc3e60c5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.890202 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-config" (OuterVolumeSpecName: "config") pod "74f70de0-ded1-4353-8cc0-5f85dc3e60c5" (UID: "74f70de0-ded1-4353-8cc0-5f85dc3e60c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.897576 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "74f70de0-ded1-4353-8cc0-5f85dc3e60c5" (UID: "74f70de0-ded1-4353-8cc0-5f85dc3e60c5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.901398 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "74f70de0-ded1-4353-8cc0-5f85dc3e60c5" (UID: "74f70de0-ded1-4353-8cc0-5f85dc3e60c5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.934906 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.934935 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddzgw\" (UniqueName: \"kubernetes.io/projected/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-kube-api-access-ddzgw\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.934960 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.934969 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.934990 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:43 crc kubenswrapper[4915]: I1124 21:41:43.937676 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "74f70de0-ded1-4353-8cc0-5f85dc3e60c5" (UID: "74f70de0-ded1-4353-8cc0-5f85dc3e60c5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:41:44 crc kubenswrapper[4915]: I1124 21:41:44.037612 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74f70de0-ded1-4353-8cc0-5f85dc3e60c5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:44 crc kubenswrapper[4915]: I1124 21:41:44.311963 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65","Type":"ContainerStarted","Data":"a74cc944c166411e0c6fefde59b6652666b03d118c0df8320b1494a5fd81653a"} Nov 24 21:41:44 crc kubenswrapper[4915]: I1124 21:41:44.317992 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" event={"ID":"8fd83616-8f63-4754-8286-5c25487c8b9c","Type":"ContainerStarted","Data":"24c2d0e96b3cc6a144cfec8e590bd50c82ffa45b995f78adae77ea1e3f89a80d"} Nov 24 21:41:44 crc kubenswrapper[4915]: I1124 21:41:44.318139 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:44 crc kubenswrapper[4915]: I1124 21:41:44.341204 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" event={"ID":"74f70de0-ded1-4353-8cc0-5f85dc3e60c5","Type":"ContainerDied","Data":"2a8c59e379cd6b597b53e67de2d58849884918430747d4dbe1b25754233396e9"} Nov 24 21:41:44 crc kubenswrapper[4915]: I1124 21:41:44.341253 4915 scope.go:117] "RemoveContainer" containerID="efb39e8f599309d1497a84221ccfe2da98670d37d81187f6c8739e7a848c0580" Nov 24 21:41:44 crc kubenswrapper[4915]: I1124 21:41:44.341358 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-b5rpb" Nov 24 21:41:44 crc kubenswrapper[4915]: I1124 21:41:44.352439 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"02f62db4-2e1b-48c4-b644-06155ffe44c1","Type":"ContainerStarted","Data":"45f3441e0d9b4c10d15761afb48a6076151047d91bbdb2c5c841b491a5167801"} Nov 24 21:41:44 crc kubenswrapper[4915]: I1124 21:41:44.386202 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" podStartSLOduration=4.386182903 podStartE2EDuration="4.386182903s" podCreationTimestamp="2025-11-24 21:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:44.335832745 +0000 UTC m=+1322.652084928" watchObservedRunningTime="2025-11-24 21:41:44.386182903 +0000 UTC m=+1322.702435076" Nov 24 21:41:44 crc kubenswrapper[4915]: I1124 21:41:44.452325 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-b5rpb"] Nov 24 21:41:44 crc kubenswrapper[4915]: I1124 21:41:44.456659 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-b5rpb"] Nov 24 21:41:45 crc kubenswrapper[4915]: I1124 21:41:45.377307 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65","Type":"ContainerStarted","Data":"aff408eb1b2f687b95645eb55ce9a32050be3995b0d1c61f1c8c9cfeb8f949e3"} Nov 24 21:41:45 crc kubenswrapper[4915]: I1124 21:41:45.377744 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" containerName="glance-log" containerID="cri-o://a74cc944c166411e0c6fefde59b6652666b03d118c0df8320b1494a5fd81653a" gracePeriod=30 Nov 24 21:41:45 crc kubenswrapper[4915]: 
I1124 21:41:45.378282 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" containerName="glance-httpd" containerID="cri-o://aff408eb1b2f687b95645eb55ce9a32050be3995b0d1c61f1c8c9cfeb8f949e3" gracePeriod=30 Nov 24 21:41:45 crc kubenswrapper[4915]: I1124 21:41:45.389242 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"02f62db4-2e1b-48c4-b644-06155ffe44c1","Type":"ContainerStarted","Data":"7b35266cbd48bb8adf693da249f5333fa1d4127174ef4107d4838d81b046ae60"} Nov 24 21:41:45 crc kubenswrapper[4915]: I1124 21:41:45.389315 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="02f62db4-2e1b-48c4-b644-06155ffe44c1" containerName="glance-log" containerID="cri-o://45f3441e0d9b4c10d15761afb48a6076151047d91bbdb2c5c841b491a5167801" gracePeriod=30 Nov 24 21:41:45 crc kubenswrapper[4915]: I1124 21:41:45.389425 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="02f62db4-2e1b-48c4-b644-06155ffe44c1" containerName="glance-httpd" containerID="cri-o://7b35266cbd48bb8adf693da249f5333fa1d4127174ef4107d4838d81b046ae60" gracePeriod=30 Nov 24 21:41:45 crc kubenswrapper[4915]: I1124 21:41:45.403440 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.403422411 podStartE2EDuration="5.403422411s" podCreationTimestamp="2025-11-24 21:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:45.40150408 +0000 UTC m=+1323.717756253" watchObservedRunningTime="2025-11-24 21:41:45.403422411 +0000 UTC m=+1323.719674584" Nov 24 21:41:46 crc kubenswrapper[4915]: I1124 21:41:46.400692 4915 
generic.go:334] "Generic (PLEG): container finished" podID="b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" containerID="aff408eb1b2f687b95645eb55ce9a32050be3995b0d1c61f1c8c9cfeb8f949e3" exitCode=0 Nov 24 21:41:46 crc kubenswrapper[4915]: I1124 21:41:46.401028 4915 generic.go:334] "Generic (PLEG): container finished" podID="b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" containerID="a74cc944c166411e0c6fefde59b6652666b03d118c0df8320b1494a5fd81653a" exitCode=143 Nov 24 21:41:46 crc kubenswrapper[4915]: I1124 21:41:46.400902 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65","Type":"ContainerDied","Data":"aff408eb1b2f687b95645eb55ce9a32050be3995b0d1c61f1c8c9cfeb8f949e3"} Nov 24 21:41:46 crc kubenswrapper[4915]: I1124 21:41:46.401099 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65","Type":"ContainerDied","Data":"a74cc944c166411e0c6fefde59b6652666b03d118c0df8320b1494a5fd81653a"} Nov 24 21:41:46 crc kubenswrapper[4915]: I1124 21:41:46.403368 4915 generic.go:334] "Generic (PLEG): container finished" podID="02f62db4-2e1b-48c4-b644-06155ffe44c1" containerID="7b35266cbd48bb8adf693da249f5333fa1d4127174ef4107d4838d81b046ae60" exitCode=143 Nov 24 21:41:46 crc kubenswrapper[4915]: I1124 21:41:46.403383 4915 generic.go:334] "Generic (PLEG): container finished" podID="02f62db4-2e1b-48c4-b644-06155ffe44c1" containerID="45f3441e0d9b4c10d15761afb48a6076151047d91bbdb2c5c841b491a5167801" exitCode=143 Nov 24 21:41:46 crc kubenswrapper[4915]: I1124 21:41:46.403430 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"02f62db4-2e1b-48c4-b644-06155ffe44c1","Type":"ContainerDied","Data":"7b35266cbd48bb8adf693da249f5333fa1d4127174ef4107d4838d81b046ae60"} Nov 24 21:41:46 crc kubenswrapper[4915]: I1124 21:41:46.403480 4915 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"02f62db4-2e1b-48c4-b644-06155ffe44c1","Type":"ContainerDied","Data":"45f3441e0d9b4c10d15761afb48a6076151047d91bbdb2c5c841b491a5167801"} Nov 24 21:41:46 crc kubenswrapper[4915]: I1124 21:41:46.405455 4915 generic.go:334] "Generic (PLEG): container finished" podID="57c34cab-0ab6-4b4d-9278-f61863a50b22" containerID="bfd0790b766d393930046cfb400c998ce4a2cfac5f9d11f1e812e3ecd4dc862b" exitCode=0 Nov 24 21:41:46 crc kubenswrapper[4915]: I1124 21:41:46.405482 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fhxnt" event={"ID":"57c34cab-0ab6-4b4d-9278-f61863a50b22","Type":"ContainerDied","Data":"bfd0790b766d393930046cfb400c998ce4a2cfac5f9d11f1e812e3ecd4dc862b"} Nov 24 21:41:46 crc kubenswrapper[4915]: I1124 21:41:46.426405 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.426390013 podStartE2EDuration="6.426390013s" podCreationTimestamp="2025-11-24 21:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:41:45.439986606 +0000 UTC m=+1323.756238799" watchObservedRunningTime="2025-11-24 21:41:46.426390013 +0000 UTC m=+1324.742642186" Nov 24 21:41:46 crc kubenswrapper[4915]: I1124 21:41:46.449305 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74f70de0-ded1-4353-8cc0-5f85dc3e60c5" path="/var/lib/kubelet/pods/74f70de0-ded1-4353-8cc0-5f85dc3e60c5/volumes" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.260580 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.272857 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.395485 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-credential-keys\") pod \"57c34cab-0ab6-4b4d-9278-f61863a50b22\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.395553 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs4xx\" (UniqueName: \"kubernetes.io/projected/02f62db4-2e1b-48c4-b644-06155ffe44c1-kube-api-access-fs4xx\") pod \"02f62db4-2e1b-48c4-b644-06155ffe44c1\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.395688 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-config-data\") pod \"02f62db4-2e1b-48c4-b644-06155ffe44c1\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.395716 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-fernet-keys\") pod \"57c34cab-0ab6-4b4d-9278-f61863a50b22\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.395755 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02f62db4-2e1b-48c4-b644-06155ffe44c1-httpd-run\") pod \"02f62db4-2e1b-48c4-b644-06155ffe44c1\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.395811 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-internal-tls-certs\") pod \"02f62db4-2e1b-48c4-b644-06155ffe44c1\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.395845 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f62db4-2e1b-48c4-b644-06155ffe44c1-logs\") pod \"02f62db4-2e1b-48c4-b644-06155ffe44c1\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.395867 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-combined-ca-bundle\") pod \"57c34cab-0ab6-4b4d-9278-f61863a50b22\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.395912 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfhp7\" (UniqueName: \"kubernetes.io/projected/57c34cab-0ab6-4b4d-9278-f61863a50b22-kube-api-access-hfhp7\") pod \"57c34cab-0ab6-4b4d-9278-f61863a50b22\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.395935 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"02f62db4-2e1b-48c4-b644-06155ffe44c1\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.395957 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-config-data\") pod \"57c34cab-0ab6-4b4d-9278-f61863a50b22\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.395982 4915 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-scripts\") pod \"57c34cab-0ab6-4b4d-9278-f61863a50b22\" (UID: \"57c34cab-0ab6-4b4d-9278-f61863a50b22\") " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.396076 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-scripts\") pod \"02f62db4-2e1b-48c4-b644-06155ffe44c1\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.396097 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-combined-ca-bundle\") pod \"02f62db4-2e1b-48c4-b644-06155ffe44c1\" (UID: \"02f62db4-2e1b-48c4-b644-06155ffe44c1\") " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.397590 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f62db4-2e1b-48c4-b644-06155ffe44c1-logs" (OuterVolumeSpecName: "logs") pod "02f62db4-2e1b-48c4-b644-06155ffe44c1" (UID: "02f62db4-2e1b-48c4-b644-06155ffe44c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.397801 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02f62db4-2e1b-48c4-b644-06155ffe44c1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "02f62db4-2e1b-48c4-b644-06155ffe44c1" (UID: "02f62db4-2e1b-48c4-b644-06155ffe44c1"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.405021 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "57c34cab-0ab6-4b4d-9278-f61863a50b22" (UID: "57c34cab-0ab6-4b4d-9278-f61863a50b22"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.405453 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f62db4-2e1b-48c4-b644-06155ffe44c1-kube-api-access-fs4xx" (OuterVolumeSpecName: "kube-api-access-fs4xx") pod "02f62db4-2e1b-48c4-b644-06155ffe44c1" (UID: "02f62db4-2e1b-48c4-b644-06155ffe44c1"). InnerVolumeSpecName "kube-api-access-fs4xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.405745 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57c34cab-0ab6-4b4d-9278-f61863a50b22-kube-api-access-hfhp7" (OuterVolumeSpecName: "kube-api-access-hfhp7") pod "57c34cab-0ab6-4b4d-9278-f61863a50b22" (UID: "57c34cab-0ab6-4b4d-9278-f61863a50b22"). InnerVolumeSpecName "kube-api-access-hfhp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.424146 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-scripts" (OuterVolumeSpecName: "scripts") pod "02f62db4-2e1b-48c4-b644-06155ffe44c1" (UID: "02f62db4-2e1b-48c4-b644-06155ffe44c1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.424708 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "02f62db4-2e1b-48c4-b644-06155ffe44c1" (UID: "02f62db4-2e1b-48c4-b644-06155ffe44c1"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.431092 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-scripts" (OuterVolumeSpecName: "scripts") pod "57c34cab-0ab6-4b4d-9278-f61863a50b22" (UID: "57c34cab-0ab6-4b4d-9278-f61863a50b22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.431191 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "57c34cab-0ab6-4b4d-9278-f61863a50b22" (UID: "57c34cab-0ab6-4b4d-9278-f61863a50b22"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.447172 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fhxnt" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.468455 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.483389 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57c34cab-0ab6-4b4d-9278-f61863a50b22" (UID: "57c34cab-0ab6-4b4d-9278-f61863a50b22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.498266 4915 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.498305 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs4xx\" (UniqueName: \"kubernetes.io/projected/02f62db4-2e1b-48c4-b644-06155ffe44c1-kube-api-access-fs4xx\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.498320 4915 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.498330 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/02f62db4-2e1b-48c4-b644-06155ffe44c1-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.498340 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02f62db4-2e1b-48c4-b644-06155ffe44c1-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.498351 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.498362 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfhp7\" (UniqueName: \"kubernetes.io/projected/57c34cab-0ab6-4b4d-9278-f61863a50b22-kube-api-access-hfhp7\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.498392 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.498403 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.498414 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.499370 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02f62db4-2e1b-48c4-b644-06155ffe44c1" (UID: "02f62db4-2e1b-48c4-b644-06155ffe44c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.499390 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-config-data" (OuterVolumeSpecName: "config-data") pod "57c34cab-0ab6-4b4d-9278-f61863a50b22" (UID: "57c34cab-0ab6-4b4d-9278-f61863a50b22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.568031 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fhxnt" event={"ID":"57c34cab-0ab6-4b4d-9278-f61863a50b22","Type":"ContainerDied","Data":"2d4b2ebfaa93d1ea8a9be306b362e49953c41ae748d27e350f933cebecf8c96e"} Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.568072 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d4b2ebfaa93d1ea8a9be306b362e49953c41ae748d27e350f933cebecf8c96e" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.568086 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"02f62db4-2e1b-48c4-b644-06155ffe44c1","Type":"ContainerDied","Data":"8f57e1f3e7ad876b9c942ab7371bf8caefb6b0908ac2d43f8aa33065d6fd9938"} Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.568110 4915 scope.go:117] "RemoveContainer" containerID="7b35266cbd48bb8adf693da249f5333fa1d4127174ef4107d4838d81b046ae60" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.580625 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-config-data" (OuterVolumeSpecName: "config-data") pod "02f62db4-2e1b-48c4-b644-06155ffe44c1" (UID: "02f62db4-2e1b-48c4-b644-06155ffe44c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.584865 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-fhxnt"] Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.594746 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.600369 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.603117 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.603202 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57c34cab-0ab6-4b4d-9278-f61863a50b22-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.603273 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.603665 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-fhxnt"] Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.611560 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "02f62db4-2e1b-48c4-b644-06155ffe44c1" (UID: "02f62db4-2e1b-48c4-b644-06155ffe44c1"). 
InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.670959 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lf4m9"] Nov 24 21:41:48 crc kubenswrapper[4915]: E1124 21:41:48.672107 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74f70de0-ded1-4353-8cc0-5f85dc3e60c5" containerName="init" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.672129 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="74f70de0-ded1-4353-8cc0-5f85dc3e60c5" containerName="init" Nov 24 21:41:48 crc kubenswrapper[4915]: E1124 21:41:48.672148 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57c34cab-0ab6-4b4d-9278-f61863a50b22" containerName="keystone-bootstrap" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.672156 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="57c34cab-0ab6-4b4d-9278-f61863a50b22" containerName="keystone-bootstrap" Nov 24 21:41:48 crc kubenswrapper[4915]: E1124 21:41:48.672169 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f62db4-2e1b-48c4-b644-06155ffe44c1" containerName="glance-log" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.672175 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f62db4-2e1b-48c4-b644-06155ffe44c1" containerName="glance-log" Nov 24 21:41:48 crc kubenswrapper[4915]: E1124 21:41:48.672191 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02f62db4-2e1b-48c4-b644-06155ffe44c1" containerName="glance-httpd" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.672197 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="02f62db4-2e1b-48c4-b644-06155ffe44c1" containerName="glance-httpd" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.672388 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f62db4-2e1b-48c4-b644-06155ffe44c1" 
containerName="glance-httpd" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.672409 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="02f62db4-2e1b-48c4-b644-06155ffe44c1" containerName="glance-log" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.672424 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="57c34cab-0ab6-4b4d-9278-f61863a50b22" containerName="keystone-bootstrap" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.672434 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="74f70de0-ded1-4353-8cc0-5f85dc3e60c5" containerName="init" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.673217 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.696590 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lf4m9"] Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.706214 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02f62db4-2e1b-48c4-b644-06155ffe44c1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.808641 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.808652 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnvk6\" (UniqueName: \"kubernetes.io/projected/582b5498-6320-42dc-9a5b-9a6fd28c791b-kube-api-access-gnvk6\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.808881 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-scripts\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.808934 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-credential-keys\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.809244 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-config-data\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.809389 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-fernet-keys\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.809560 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-combined-ca-bundle\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.827981 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:41:48 crc kubenswrapper[4915]: 
I1124 21:41:48.845371 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.847938 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.854718 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.857098 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.862420 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.913001 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnvk6\" (UniqueName: \"kubernetes.io/projected/582b5498-6320-42dc-9a5b-9a6fd28c791b-kube-api-access-gnvk6\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.913037 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-scripts\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.913063 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-credential-keys\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc 
kubenswrapper[4915]: I1124 21:41:48.913129 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-config-data\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.913164 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-fernet-keys\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.913209 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-combined-ca-bundle\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.917464 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-scripts\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.918051 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-config-data\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.918657 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-fernet-keys\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.922296 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-credential-keys\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.936667 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-combined-ca-bundle\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.937580 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnvk6\" (UniqueName: \"kubernetes.io/projected/582b5498-6320-42dc-9a5b-9a6fd28c791b-kube-api-access-gnvk6\") pod \"keystone-bootstrap-lf4m9\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:48 crc kubenswrapper[4915]: I1124 21:41:48.990092 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.014658 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e385544-9a00-45e1-a13d-246a4fb83c1a-logs\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.014707 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.014764 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.014800 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.014828 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.014887 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e385544-9a00-45e1-a13d-246a4fb83c1a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.014906 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj9pr\" (UniqueName: \"kubernetes.io/projected/1e385544-9a00-45e1-a13d-246a4fb83c1a-kube-api-access-sj9pr\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.014979 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.117331 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.117401 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.117633 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.117811 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e385544-9a00-45e1-a13d-246a4fb83c1a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.117846 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj9pr\" (UniqueName: \"kubernetes.io/projected/1e385544-9a00-45e1-a13d-246a4fb83c1a-kube-api-access-sj9pr\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.118058 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.118222 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e385544-9a00-45e1-a13d-246a4fb83c1a-logs\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " 
pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.118288 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.118492 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e385544-9a00-45e1-a13d-246a4fb83c1a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.118550 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.123653 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.128325 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.128638 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.132210 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e385544-9a00-45e1-a13d-246a4fb83c1a-logs\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.145733 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj9pr\" (UniqueName: \"kubernetes.io/projected/1e385544-9a00-45e1-a13d-246a4fb83c1a-kube-api-access-sj9pr\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.147253 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.155846 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:41:49 crc kubenswrapper[4915]: I1124 21:41:49.172190 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:41:50 crc kubenswrapper[4915]: I1124 21:41:50.442889 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f62db4-2e1b-48c4-b644-06155ffe44c1" path="/var/lib/kubelet/pods/02f62db4-2e1b-48c4-b644-06155ffe44c1/volumes" Nov 24 21:41:50 crc kubenswrapper[4915]: I1124 21:41:50.444141 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57c34cab-0ab6-4b4d-9278-f61863a50b22" path="/var/lib/kubelet/pods/57c34cab-0ab6-4b4d-9278-f61863a50b22/volumes" Nov 24 21:41:51 crc kubenswrapper[4915]: I1124 21:41:51.307969 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:41:51 crc kubenswrapper[4915]: I1124 21:41:51.375426 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-96hc8"] Nov 24 21:41:51 crc kubenswrapper[4915]: I1124 21:41:51.375677 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8" podUID="f4e32b86-3497-4bf8-84d7-594ad6597982" containerName="dnsmasq-dns" containerID="cri-o://3e3a1fb8bb9a4614f248a4b139ca3149303c45375fe0e170a91191828e42d958" gracePeriod=10 Nov 24 21:41:52 crc kubenswrapper[4915]: I1124 21:41:52.514390 4915 generic.go:334] "Generic (PLEG): container finished" podID="f4e32b86-3497-4bf8-84d7-594ad6597982" containerID="3e3a1fb8bb9a4614f248a4b139ca3149303c45375fe0e170a91191828e42d958" exitCode=0 Nov 24 21:41:52 crc kubenswrapper[4915]: I1124 21:41:52.514454 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8" event={"ID":"f4e32b86-3497-4bf8-84d7-594ad6597982","Type":"ContainerDied","Data":"3e3a1fb8bb9a4614f248a4b139ca3149303c45375fe0e170a91191828e42d958"} Nov 24 21:41:54 crc kubenswrapper[4915]: I1124 21:41:54.327465 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:41:54 crc kubenswrapper[4915]: I1124 21:41:54.327912 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:41:56 crc kubenswrapper[4915]: I1124 21:41:56.374679 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8" podUID="f4e32b86-3497-4bf8-84d7-594ad6597982" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: connect: connection refused" Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.856399 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.916905 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-logs\") pod \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.917090 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-combined-ca-bundle\") pod \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.917141 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-httpd-run\") pod \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.917199 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-config-data\") pod \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.917293 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-scripts\") pod \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.917347 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgzzp\" (UniqueName: 
\"kubernetes.io/projected/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-kube-api-access-rgzzp\") pod \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.917421 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-public-tls-certs\") pod \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.917467 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\" (UID: \"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65\") " Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.919568 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" (UID: "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.919698 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-logs" (OuterVolumeSpecName: "logs") pod "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" (UID: "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.925110 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" (UID: "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.926324 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-scripts" (OuterVolumeSpecName: "scripts") pod "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" (UID: "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.945066 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-kube-api-access-rgzzp" (OuterVolumeSpecName: "kube-api-access-rgzzp") pod "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" (UID: "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65"). InnerVolumeSpecName "kube-api-access-rgzzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.962132 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" (UID: "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:57 crc kubenswrapper[4915]: I1124 21:41:57.996872 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-config-data" (OuterVolumeSpecName: "config-data") pod "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" (UID: "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.022334 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.022370 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.022379 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.022388 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.022398 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgzzp\" (UniqueName: \"kubernetes.io/projected/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-kube-api-access-rgzzp\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.022426 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.022435 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.030812 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" (UID: "b27ccd25-ed01-4a02-8b2e-eb6cc361fc65"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.046949 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.125076 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.125137 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:41:58 crc kubenswrapper[4915]: E1124 21:41:58.236857 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 24 21:41:58 crc kubenswrapper[4915]: E1124 21:41:58.237017 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n9h695h79h6fh64fh84h647h66h5f7h59bh65h64h664h698hddh547h5cfh645h58ch5cbh678h7bh655h678h697h5f8h59ch88h588hc4h675h5fdq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9b7vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(c1a56691-c50a-4917-9331-4920a62c5a3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.240709 4915 scope.go:117] "RemoveContainer" containerID="45f3441e0d9b4c10d15761afb48a6076151047d91bbdb2c5c841b491a5167801" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.583236 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b27ccd25-ed01-4a02-8b2e-eb6cc361fc65","Type":"ContainerDied","Data":"c2a99eca3313c3af75ed096926396acd95c4f7c5d766eb3443439501c0511998"} Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.583306 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.610371 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.622905 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.643599 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:41:58 crc kubenswrapper[4915]: E1124 21:41:58.644201 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" containerName="glance-log" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.644217 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" containerName="glance-log" Nov 24 21:41:58 crc kubenswrapper[4915]: E1124 21:41:58.644237 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" containerName="glance-httpd" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.644244 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" containerName="glance-httpd" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.644464 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" containerName="glance-log" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.644486 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" containerName="glance-httpd" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.645938 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.649629 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.649629 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.657352 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:41:58 crc kubenswrapper[4915]: E1124 21:41:58.722924 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Nov 24 21:41:58 crc kubenswrapper[4915]: E1124 21:41:58.723073 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdfq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-6jszw_openstack(8c21efec-e27b-4e9e-bdc6-a4d9a0eab412): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 
24 21:41:58 crc kubenswrapper[4915]: E1124 21:41:58.726171 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-6jszw" podUID="8c21efec-e27b-4e9e-bdc6-a4d9a0eab412" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.737619 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnsn7\" (UniqueName: \"kubernetes.io/projected/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-kube-api-access-pnsn7\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.737701 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-config-data\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.737765 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.737825 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-scripts\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc 
kubenswrapper[4915]: I1124 21:41:58.737895 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.737938 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-logs\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.737968 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.737992 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.841324 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 
21:41:58.841429 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-logs\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.841468 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.841498 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.841525 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnsn7\" (UniqueName: \"kubernetes.io/projected/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-kube-api-access-pnsn7\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.841592 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-config-data\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.841842 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.841921 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.842221 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.842376 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-logs\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.843075 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-scripts\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.848931 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.849243 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-scripts\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.850347 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-config-data\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.854022 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.864510 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnsn7\" (UniqueName: \"kubernetes.io/projected/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-kube-api-access-pnsn7\") pod \"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.883578 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"glance-default-external-api-0\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " pod="openstack/glance-default-external-api-0" Nov 24 21:41:58 crc kubenswrapper[4915]: I1124 21:41:58.968273 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:41:59 crc kubenswrapper[4915]: E1124 21:41:59.600168 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-6jszw" podUID="8c21efec-e27b-4e9e-bdc6-a4d9a0eab412" Nov 24 21:42:00 crc kubenswrapper[4915]: I1124 21:42:00.441619 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b27ccd25-ed01-4a02-8b2e-eb6cc361fc65" path="/var/lib/kubelet/pods/b27ccd25-ed01-4a02-8b2e-eb6cc361fc65/volumes" Nov 24 21:42:01 crc kubenswrapper[4915]: I1124 21:42:01.622648 4915 generic.go:334] "Generic (PLEG): container finished" podID="54e7cfde-938c-4b51-8cfc-a1f290de00fd" containerID="f9340f261202a38f689b7c9fce867b689e2b68762d7b650e05089de27c813aad" exitCode=0 Nov 24 21:42:01 crc kubenswrapper[4915]: I1124 21:42:01.622938 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vx4fc" event={"ID":"54e7cfde-938c-4b51-8cfc-a1f290de00fd","Type":"ContainerDied","Data":"f9340f261202a38f689b7c9fce867b689e2b68762d7b650e05089de27c813aad"} Nov 24 21:42:06 crc kubenswrapper[4915]: I1124 21:42:06.373897 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8" podUID="f4e32b86-3497-4bf8-84d7-594ad6597982" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: i/o timeout" Nov 24 21:42:07 crc kubenswrapper[4915]: E1124 21:42:07.699718 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 24 21:42:07 crc kubenswrapper[4915]: E1124 21:42:07.700528 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p2k2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-kxbs8_openstack(8612c3b5-24cc-431a-a888-8be923564356): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Nov 24 21:42:07 crc kubenswrapper[4915]: E1124 21:42:07.701742 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-kxbs8" podUID="8612c3b5-24cc-431a-a888-8be923564356" Nov 24 21:42:07 crc kubenswrapper[4915]: I1124 21:42:07.868814 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8" Nov 24 21:42:07 crc kubenswrapper[4915]: I1124 21:42:07.885281 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vx4fc" Nov 24 21:42:07 crc kubenswrapper[4915]: I1124 21:42:07.980826 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-ovsdbserver-sb\") pod \"f4e32b86-3497-4bf8-84d7-594ad6597982\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " Nov 24 21:42:07 crc kubenswrapper[4915]: I1124 21:42:07.980888 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-ovsdbserver-nb\") pod \"f4e32b86-3497-4bf8-84d7-594ad6597982\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " Nov 24 21:42:07 crc kubenswrapper[4915]: I1124 21:42:07.980919 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e7cfde-938c-4b51-8cfc-a1f290de00fd-combined-ca-bundle\") pod \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\" (UID: \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\") " Nov 24 21:42:07 crc kubenswrapper[4915]: I1124 21:42:07.980945 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/54e7cfde-938c-4b51-8cfc-a1f290de00fd-config\") pod \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\" (UID: \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\") " Nov 24 21:42:07 crc kubenswrapper[4915]: I1124 21:42:07.981116 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-dns-swift-storage-0\") pod \"f4e32b86-3497-4bf8-84d7-594ad6597982\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " Nov 24 21:42:07 crc kubenswrapper[4915]: I1124 21:42:07.981154 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpf6p\" (UniqueName: \"kubernetes.io/projected/54e7cfde-938c-4b51-8cfc-a1f290de00fd-kube-api-access-gpf6p\") pod \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\" (UID: \"54e7cfde-938c-4b51-8cfc-a1f290de00fd\") " Nov 24 21:42:07 crc kubenswrapper[4915]: I1124 21:42:07.981310 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdblp\" (UniqueName: \"kubernetes.io/projected/f4e32b86-3497-4bf8-84d7-594ad6597982-kube-api-access-rdblp\") pod \"f4e32b86-3497-4bf8-84d7-594ad6597982\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " Nov 24 21:42:07 crc kubenswrapper[4915]: I1124 21:42:07.981362 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-config\") pod \"f4e32b86-3497-4bf8-84d7-594ad6597982\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " Nov 24 21:42:07 crc kubenswrapper[4915]: I1124 21:42:07.981448 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-dns-svc\") pod \"f4e32b86-3497-4bf8-84d7-594ad6597982\" (UID: \"f4e32b86-3497-4bf8-84d7-594ad6597982\") " Nov 24 21:42:07 crc 
kubenswrapper[4915]: I1124 21:42:07.992457 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e7cfde-938c-4b51-8cfc-a1f290de00fd-kube-api-access-gpf6p" (OuterVolumeSpecName: "kube-api-access-gpf6p") pod "54e7cfde-938c-4b51-8cfc-a1f290de00fd" (UID: "54e7cfde-938c-4b51-8cfc-a1f290de00fd"). InnerVolumeSpecName "kube-api-access-gpf6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:07 crc kubenswrapper[4915]: I1124 21:42:07.994914 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4e32b86-3497-4bf8-84d7-594ad6597982-kube-api-access-rdblp" (OuterVolumeSpecName: "kube-api-access-rdblp") pod "f4e32b86-3497-4bf8-84d7-594ad6597982" (UID: "f4e32b86-3497-4bf8-84d7-594ad6597982"). InnerVolumeSpecName "kube-api-access-rdblp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.017906 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54e7cfde-938c-4b51-8cfc-a1f290de00fd-config" (OuterVolumeSpecName: "config") pod "54e7cfde-938c-4b51-8cfc-a1f290de00fd" (UID: "54e7cfde-938c-4b51-8cfc-a1f290de00fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.050295 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54e7cfde-938c-4b51-8cfc-a1f290de00fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54e7cfde-938c-4b51-8cfc-a1f290de00fd" (UID: "54e7cfde-938c-4b51-8cfc-a1f290de00fd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.050524 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f4e32b86-3497-4bf8-84d7-594ad6597982" (UID: "f4e32b86-3497-4bf8-84d7-594ad6597982"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.061698 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f4e32b86-3497-4bf8-84d7-594ad6597982" (UID: "f4e32b86-3497-4bf8-84d7-594ad6597982"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.080586 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f4e32b86-3497-4bf8-84d7-594ad6597982" (UID: "f4e32b86-3497-4bf8-84d7-594ad6597982"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.084076 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.084119 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.084079 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-config" (OuterVolumeSpecName: "config") pod "f4e32b86-3497-4bf8-84d7-594ad6597982" (UID: "f4e32b86-3497-4bf8-84d7-594ad6597982"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.084130 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54e7cfde-938c-4b51-8cfc-a1f290de00fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.084178 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/54e7cfde-938c-4b51-8cfc-a1f290de00fd-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.084201 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.084215 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpf6p\" (UniqueName: 
\"kubernetes.io/projected/54e7cfde-938c-4b51-8cfc-a1f290de00fd-kube-api-access-gpf6p\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.084228 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdblp\" (UniqueName: \"kubernetes.io/projected/f4e32b86-3497-4bf8-84d7-594ad6597982-kube-api-access-rdblp\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.090112 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f4e32b86-3497-4bf8-84d7-594ad6597982" (UID: "f4e32b86-3497-4bf8-84d7-594ad6597982"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.186313 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.186353 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f4e32b86-3497-4bf8-84d7-594ad6597982-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.703127 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vx4fc" event={"ID":"54e7cfde-938c-4b51-8cfc-a1f290de00fd","Type":"ContainerDied","Data":"ed2a5e651acba51109dd2037fec06d1041650476842dbaa166938e77e1b1f17d"} Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.703437 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed2a5e651acba51109dd2037fec06d1041650476842dbaa166938e77e1b1f17d" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.703144 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vx4fc" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.705056 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8" event={"ID":"f4e32b86-3497-4bf8-84d7-594ad6597982","Type":"ContainerDied","Data":"2fa83ef236ecdff6e03aca5ebb12122ab08a91383df5630211cb72b91d105767"} Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.705098 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8" Nov 24 21:42:08 crc kubenswrapper[4915]: E1124 21:42:08.709734 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-kxbs8" podUID="8612c3b5-24cc-431a-a888-8be923564356" Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.756295 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-96hc8"] Nov 24 21:42:08 crc kubenswrapper[4915]: I1124 21:42:08.764722 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-96hc8"] Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.062715 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-ml86b"] Nov 24 21:42:09 crc kubenswrapper[4915]: E1124 21:42:09.063299 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4e32b86-3497-4bf8-84d7-594ad6597982" containerName="dnsmasq-dns" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.063326 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e32b86-3497-4bf8-84d7-594ad6597982" containerName="dnsmasq-dns" Nov 24 21:42:09 crc kubenswrapper[4915]: E1124 21:42:09.063363 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4e32b86-3497-4bf8-84d7-594ad6597982" containerName="init" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.063370 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4e32b86-3497-4bf8-84d7-594ad6597982" containerName="init" Nov 24 21:42:09 crc kubenswrapper[4915]: E1124 21:42:09.063381 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e7cfde-938c-4b51-8cfc-a1f290de00fd" containerName="neutron-db-sync" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.063390 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e7cfde-938c-4b51-8cfc-a1f290de00fd" containerName="neutron-db-sync" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.063632 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4e32b86-3497-4bf8-84d7-594ad6597982" containerName="dnsmasq-dns" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.063651 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="54e7cfde-938c-4b51-8cfc-a1f290de00fd" containerName="neutron-db-sync" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.065183 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.072804 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-ml86b"] Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.118520 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.118591 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-dns-svc\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.118632 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.118684 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.118747 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd4wm\" (UniqueName: \"kubernetes.io/projected/9e5395dd-1fa1-461c-b0eb-edc5c817955c-kube-api-access-kd4wm\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.118797 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-config\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.133539 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-598d4b5ccb-wjxn7"] Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.135880 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.138193 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7vxkw" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.138516 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.138939 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.139635 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.148027 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-598d4b5ccb-wjxn7"] Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.221957 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-config\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.227936 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-ovndb-tls-certs\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.228053 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-dns-svc\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.228170 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.228302 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.228439 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kd4wm\" (UniqueName: \"kubernetes.io/projected/9e5395dd-1fa1-461c-b0eb-edc5c817955c-kube-api-access-kd4wm\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.228488 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-combined-ca-bundle\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.228509 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-config\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.228557 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7cjb\" (UniqueName: \"kubernetes.io/projected/521647b8-476f-4319-aa06-f9e5af5fdbe9-kube-api-access-b7cjb\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.228621 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-httpd-config\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.228693 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.229611 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.230229 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-dns-svc\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.233525 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-config\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.235539 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.236155 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-ovsdbserver-sb\") 
pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.273805 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd4wm\" (UniqueName: \"kubernetes.io/projected/9e5395dd-1fa1-461c-b0eb-edc5c817955c-kube-api-access-kd4wm\") pod \"dnsmasq-dns-6b7b667979-ml86b\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.330304 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-combined-ca-bundle\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.330352 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7cjb\" (UniqueName: \"kubernetes.io/projected/521647b8-476f-4319-aa06-f9e5af5fdbe9-kube-api-access-b7cjb\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.330392 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-httpd-config\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.330439 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-config\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: 
\"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.330463 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-ovndb-tls-certs\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.334753 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-httpd-config\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.337056 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-combined-ca-bundle\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.341509 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-config\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.346908 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-ovndb-tls-certs\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.347593 
4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7cjb\" (UniqueName: \"kubernetes.io/projected/521647b8-476f-4319-aa06-f9e5af5fdbe9-kube-api-access-b7cjb\") pod \"neutron-598d4b5ccb-wjxn7\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.382520 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.454425 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:09 crc kubenswrapper[4915]: E1124 21:42:09.693270 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 24 21:42:09 crc kubenswrapper[4915]: E1124 21:42:09.693478 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-75jb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-9t9tc_openstack(0646b5c8-f87a-4f27-9327-1bc87669623f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 21:42:09 crc kubenswrapper[4915]: E1124 21:42:09.702280 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-9t9tc" podUID="0646b5c8-f87a-4f27-9327-1bc87669623f" Nov 24 21:42:09 crc kubenswrapper[4915]: E1124 21:42:09.733153 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-9t9tc" podUID="0646b5c8-f87a-4f27-9327-1bc87669623f" Nov 24 21:42:09 crc kubenswrapper[4915]: I1124 21:42:09.755410 4915 scope.go:117] "RemoveContainer" containerID="aff408eb1b2f687b95645eb55ce9a32050be3995b0d1c61f1c8c9cfeb8f949e3" Nov 24 21:42:10 crc kubenswrapper[4915]: I1124 21:42:10.326309 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lf4m9"] Nov 24 21:42:10 crc kubenswrapper[4915]: I1124 21:42:10.346927 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:42:10 crc kubenswrapper[4915]: I1124 21:42:10.447603 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4e32b86-3497-4bf8-84d7-594ad6597982" path="/var/lib/kubelet/pods/f4e32b86-3497-4bf8-84d7-594ad6597982/volumes" Nov 24 21:42:10 crc kubenswrapper[4915]: I1124 21:42:10.550734 4915 scope.go:117] "RemoveContainer" 
containerID="a74cc944c166411e0c6fefde59b6652666b03d118c0df8320b1494a5fd81653a" Nov 24 21:42:10 crc kubenswrapper[4915]: I1124 21:42:10.830067 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1e385544-9a00-45e1-a13d-246a4fb83c1a","Type":"ContainerStarted","Data":"94b7a860d9995963ad6fef87623e96a3bb4b6cab00f8852d3d4507597a7ec927"} Nov 24 21:42:10 crc kubenswrapper[4915]: I1124 21:42:10.847673 4915 scope.go:117] "RemoveContainer" containerID="3e3a1fb8bb9a4614f248a4b139ca3149303c45375fe0e170a91191828e42d958" Nov 24 21:42:10 crc kubenswrapper[4915]: I1124 21:42:10.858753 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lf4m9" event={"ID":"582b5498-6320-42dc-9a5b-9a6fd28c791b","Type":"ContainerStarted","Data":"be92341b1612fb992f3be45afdde43d0b98264bbb1d810e850de1d84d53165a8"} Nov 24 21:42:10 crc kubenswrapper[4915]: I1124 21:42:10.979615 4915 scope.go:117] "RemoveContainer" containerID="d81c9e77a320b75356796ad1a116cb4ead474d898825d2c356a26d7b92dee185" Nov 24 21:42:11 crc kubenswrapper[4915]: W1124 21:42:11.071658 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e61db9f_1ee2_4a9a_88ee_3da3d975514a.slice/crio-44d266f0d2e1029385396fae3e264f866bd8b540ad4011a2b88ece3f9cadb8e4 WatchSource:0}: Error finding container 44d266f0d2e1029385396fae3e264f866bd8b540ad4011a2b88ece3f9cadb8e4: Status 404 returned error can't find the container with id 44d266f0d2e1029385396fae3e264f866bd8b540ad4011a2b88ece3f9cadb8e4 Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.086177 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.353356 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-598d4b5ccb-wjxn7"] Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.363604 4915 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-ml86b"] Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.375078 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5f59b8f679-96hc8" podUID="f4e32b86-3497-4bf8-84d7-594ad6597982" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.166:5353: i/o timeout" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.750339 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-855b85d565-grqmb"] Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.752367 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.758890 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.759091 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.759727 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-855b85d565-grqmb"] Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.829150 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-internal-tls-certs\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.829202 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-httpd-config\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " 
pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.829276 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-combined-ca-bundle\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.829307 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-public-tls-certs\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.829339 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzjl\" (UniqueName: \"kubernetes.io/projected/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-kube-api-access-sjzjl\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.829398 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-ovndb-tls-certs\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.829462 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-config\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " 
pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.919822 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7e61db9f-1ee2-4a9a-88ee-3da3d975514a","Type":"ContainerStarted","Data":"44d266f0d2e1029385396fae3e264f866bd8b540ad4011a2b88ece3f9cadb8e4"} Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.920833 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-598d4b5ccb-wjxn7" event={"ID":"521647b8-476f-4319-aa06-f9e5af5fdbe9","Type":"ContainerStarted","Data":"b369783a695a576c024bc7c7bcc2dc1c591e4f7d02b5a1e7c832103d1cbde7d3"} Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.920852 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-598d4b5ccb-wjxn7" event={"ID":"521647b8-476f-4319-aa06-f9e5af5fdbe9","Type":"ContainerStarted","Data":"9054c5ac5f304c3fea0bc37a8cd1e95793b7cef8cb21b93541765749ed403568"} Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.924292 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lf4m9" event={"ID":"582b5498-6320-42dc-9a5b-9a6fd28c791b","Type":"ContainerStarted","Data":"3b3e7d12bbf100725c78908321ba25aaaf6c50003e7611cbd694cc1c36d254e9"} Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.931066 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-public-tls-certs\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.931118 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjzjl\" (UniqueName: \"kubernetes.io/projected/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-kube-api-access-sjzjl\") pod \"neutron-855b85d565-grqmb\" (UID: 
\"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.931188 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-ovndb-tls-certs\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.931247 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-config\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.931302 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-internal-tls-certs\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.931320 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-httpd-config\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.931375 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-combined-ca-bundle\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc 
kubenswrapper[4915]: I1124 21:42:11.940531 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-ovndb-tls-certs\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.941295 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-combined-ca-bundle\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.942048 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-config\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.945745 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-internal-tls-certs\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.946166 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lf4m9" podStartSLOduration=23.946147987 podStartE2EDuration="23.946147987s" podCreationTimestamp="2025-11-24 21:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:11.939807416 +0000 UTC m=+1350.256059589" watchObservedRunningTime="2025-11-24 
21:42:11.946147987 +0000 UTC m=+1350.262400160" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.948393 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1a56691-c50a-4917-9331-4920a62c5a3b","Type":"ContainerStarted","Data":"1b10ff0cafbb37746e4eda668a03a692705626c3d1ca7de1b39096e6444c2157"} Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.948585 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-public-tls-certs\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.951386 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-httpd-config\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.955718 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjzjl\" (UniqueName: \"kubernetes.io/projected/e93587e9-d7d1-4cbb-894c-ad138ffa8fdd-kube-api-access-sjzjl\") pod \"neutron-855b85d565-grqmb\" (UID: \"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd\") " pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.962709 4915 generic.go:334] "Generic (PLEG): container finished" podID="9e5395dd-1fa1-461c-b0eb-edc5c817955c" containerID="c0b5d949dd930184037b43f20b1a2e9299a00f8fe33bffa547431335eaea539b" exitCode=0 Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.962862 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-ml86b" 
event={"ID":"9e5395dd-1fa1-461c-b0eb-edc5c817955c","Type":"ContainerDied","Data":"c0b5d949dd930184037b43f20b1a2e9299a00f8fe33bffa547431335eaea539b"} Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.962892 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-ml86b" event={"ID":"9e5395dd-1fa1-461c-b0eb-edc5c817955c","Type":"ContainerStarted","Data":"074a66bac4a424746932f022f11e4175a17b0a6866b6cff997ed8eb6dfea7c3d"} Nov 24 21:42:11 crc kubenswrapper[4915]: I1124 21:42:11.980928 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2dxdv" event={"ID":"578a54f5-2d6f-4c21-b549-55cd00237570","Type":"ContainerStarted","Data":"ce418a7f9c3df84160520ae439d16daefaa676f20397a06de14da0a48733b4b1"} Nov 24 21:42:12 crc kubenswrapper[4915]: I1124 21:42:12.019302 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2dxdv" podStartSLOduration=7.253894024 podStartE2EDuration="32.019267238s" podCreationTimestamp="2025-11-24 21:41:40 +0000 UTC" firstStartedPulling="2025-11-24 21:41:42.355366237 +0000 UTC m=+1320.671618420" lastFinishedPulling="2025-11-24 21:42:07.120739451 +0000 UTC m=+1345.436991634" observedRunningTime="2025-11-24 21:42:12.002009483 +0000 UTC m=+1350.318261656" watchObservedRunningTime="2025-11-24 21:42:12.019267238 +0000 UTC m=+1350.335519411" Nov 24 21:42:12 crc kubenswrapper[4915]: I1124 21:42:12.122350 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:12 crc kubenswrapper[4915]: I1124 21:42:12.813188 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-855b85d565-grqmb"] Nov 24 21:42:12 crc kubenswrapper[4915]: W1124 21:42:12.830600 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode93587e9_d7d1_4cbb_894c_ad138ffa8fdd.slice/crio-fd469e6fd9f4a3fcd4429847a2722d9c322abc7d27ebdc5be04ad813303473b4 WatchSource:0}: Error finding container fd469e6fd9f4a3fcd4429847a2722d9c322abc7d27ebdc5be04ad813303473b4: Status 404 returned error can't find the container with id fd469e6fd9f4a3fcd4429847a2722d9c322abc7d27ebdc5be04ad813303473b4 Nov 24 21:42:13 crc kubenswrapper[4915]: I1124 21:42:13.014258 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-598d4b5ccb-wjxn7" event={"ID":"521647b8-476f-4319-aa06-f9e5af5fdbe9","Type":"ContainerStarted","Data":"36528c4fd166c46acc50b734dbc2700ef96a38c050fd1120d8ea510c6977f90e"} Nov 24 21:42:13 crc kubenswrapper[4915]: I1124 21:42:13.015367 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:13 crc kubenswrapper[4915]: I1124 21:42:13.017852 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1e385544-9a00-45e1-a13d-246a4fb83c1a","Type":"ContainerStarted","Data":"f3e8acb1a79d00a6496f0fe12dd9bac40012355de41215adb38f498553c99853"} Nov 24 21:42:13 crc kubenswrapper[4915]: I1124 21:42:13.019885 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-ml86b" event={"ID":"9e5395dd-1fa1-461c-b0eb-edc5c817955c","Type":"ContainerStarted","Data":"a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615"} Nov 24 21:42:13 crc kubenswrapper[4915]: I1124 21:42:13.020657 4915 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:13 crc kubenswrapper[4915]: I1124 21:42:13.023183 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-855b85d565-grqmb" event={"ID":"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd","Type":"ContainerStarted","Data":"fd469e6fd9f4a3fcd4429847a2722d9c322abc7d27ebdc5be04ad813303473b4"} Nov 24 21:42:13 crc kubenswrapper[4915]: I1124 21:42:13.029699 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7e61db9f-1ee2-4a9a-88ee-3da3d975514a","Type":"ContainerStarted","Data":"7acd04682579663946c79236b8a9dce5d70edb35ad16629d503e76912c8f9f87"} Nov 24 21:42:13 crc kubenswrapper[4915]: I1124 21:42:13.029749 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7e61db9f-1ee2-4a9a-88ee-3da3d975514a","Type":"ContainerStarted","Data":"a431a8479b3b83816b2ae374c619415c393d9ae1771f2b17130475d272e4ac73"} Nov 24 21:42:13 crc kubenswrapper[4915]: I1124 21:42:13.061930 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-598d4b5ccb-wjxn7" podStartSLOduration=4.061909112 podStartE2EDuration="4.061909112s" podCreationTimestamp="2025-11-24 21:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:13.045291404 +0000 UTC m=+1351.361543577" watchObservedRunningTime="2025-11-24 21:42:13.061909112 +0000 UTC m=+1351.378161295" Nov 24 21:42:13 crc kubenswrapper[4915]: I1124 21:42:13.090105 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=15.090080791 podStartE2EDuration="15.090080791s" podCreationTimestamp="2025-11-24 21:41:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-24 21:42:13.088659152 +0000 UTC m=+1351.404911325" watchObservedRunningTime="2025-11-24 21:42:13.090080791 +0000 UTC m=+1351.406332964" Nov 24 21:42:13 crc kubenswrapper[4915]: I1124 21:42:13.128027 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-ml86b" podStartSLOduration=4.128002583 podStartE2EDuration="4.128002583s" podCreationTimestamp="2025-11-24 21:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:13.108409925 +0000 UTC m=+1351.424662098" watchObservedRunningTime="2025-11-24 21:42:13.128002583 +0000 UTC m=+1351.444254766" Nov 24 21:42:14 crc kubenswrapper[4915]: I1124 21:42:14.048241 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6jszw" event={"ID":"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412","Type":"ContainerStarted","Data":"ce8b42e8ea3db5b79b2a1cb9cb42c5123911d4be24151650c31db77bdffa919d"} Nov 24 21:42:14 crc kubenswrapper[4915]: I1124 21:42:14.053607 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1e385544-9a00-45e1-a13d-246a4fb83c1a","Type":"ContainerStarted","Data":"db52c0172ea04d0a948de02991c171a2413e5b7665489c613c5f8155042a9753"} Nov 24 21:42:14 crc kubenswrapper[4915]: I1124 21:42:14.055634 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-855b85d565-grqmb" event={"ID":"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd","Type":"ContainerStarted","Data":"e00b4026f99b4717c44a81c62a26d02b99e82d1a542a5e795c5b2e67e4618c59"} Nov 24 21:42:14 crc kubenswrapper[4915]: I1124 21:42:14.055665 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-855b85d565-grqmb" event={"ID":"e93587e9-d7d1-4cbb-894c-ad138ffa8fdd","Type":"ContainerStarted","Data":"7c9105301451303542e621a496f4cf512c051d56da2aa583b5e8f49f3ab87324"} Nov 24 21:42:14 crc 
kubenswrapper[4915]: I1124 21:42:14.063358 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-6jszw" podStartSLOduration=3.061457565 podStartE2EDuration="34.063340343s" podCreationTimestamp="2025-11-24 21:41:40 +0000 UTC" firstStartedPulling="2025-11-24 21:41:41.901392176 +0000 UTC m=+1320.217644359" lastFinishedPulling="2025-11-24 21:42:12.903274964 +0000 UTC m=+1351.219527137" observedRunningTime="2025-11-24 21:42:14.061250386 +0000 UTC m=+1352.377502559" watchObservedRunningTime="2025-11-24 21:42:14.063340343 +0000 UTC m=+1352.379592526" Nov 24 21:42:14 crc kubenswrapper[4915]: I1124 21:42:14.099874 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-855b85d565-grqmb" podStartSLOduration=3.099850497 podStartE2EDuration="3.099850497s" podCreationTimestamp="2025-11-24 21:42:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:14.079224791 +0000 UTC m=+1352.395476984" watchObservedRunningTime="2025-11-24 21:42:14.099850497 +0000 UTC m=+1352.416102670" Nov 24 21:42:14 crc kubenswrapper[4915]: I1124 21:42:14.131145 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=26.131090179 podStartE2EDuration="26.131090179s" podCreationTimestamp="2025-11-24 21:41:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:14.116064924 +0000 UTC m=+1352.432317097" watchObservedRunningTime="2025-11-24 21:42:14.131090179 +0000 UTC m=+1352.447342362" Nov 24 21:42:15 crc kubenswrapper[4915]: I1124 21:42:15.072043 4915 generic.go:334] "Generic (PLEG): container finished" podID="578a54f5-2d6f-4c21-b549-55cd00237570" containerID="ce418a7f9c3df84160520ae439d16daefaa676f20397a06de14da0a48733b4b1" exitCode=0 Nov 24 
21:42:15 crc kubenswrapper[4915]: I1124 21:42:15.072258 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2dxdv" event={"ID":"578a54f5-2d6f-4c21-b549-55cd00237570","Type":"ContainerDied","Data":"ce418a7f9c3df84160520ae439d16daefaa676f20397a06de14da0a48733b4b1"} Nov 24 21:42:15 crc kubenswrapper[4915]: I1124 21:42:15.074066 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:17 crc kubenswrapper[4915]: I1124 21:42:17.094337 4915 generic.go:334] "Generic (PLEG): container finished" podID="582b5498-6320-42dc-9a5b-9a6fd28c791b" containerID="3b3e7d12bbf100725c78908321ba25aaaf6c50003e7611cbd694cc1c36d254e9" exitCode=0 Nov 24 21:42:17 crc kubenswrapper[4915]: I1124 21:42:17.094409 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lf4m9" event={"ID":"582b5498-6320-42dc-9a5b-9a6fd28c791b","Type":"ContainerDied","Data":"3b3e7d12bbf100725c78908321ba25aaaf6c50003e7611cbd694cc1c36d254e9"} Nov 24 21:42:17 crc kubenswrapper[4915]: I1124 21:42:17.898586 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2dxdv" Nov 24 21:42:17 crc kubenswrapper[4915]: I1124 21:42:17.970372 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-config-data\") pod \"578a54f5-2d6f-4c21-b549-55cd00237570\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " Nov 24 21:42:17 crc kubenswrapper[4915]: I1124 21:42:17.970470 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-scripts\") pod \"578a54f5-2d6f-4c21-b549-55cd00237570\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " Nov 24 21:42:17 crc kubenswrapper[4915]: I1124 21:42:17.970540 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578a54f5-2d6f-4c21-b549-55cd00237570-logs\") pod \"578a54f5-2d6f-4c21-b549-55cd00237570\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " Nov 24 21:42:17 crc kubenswrapper[4915]: I1124 21:42:17.970604 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94lhq\" (UniqueName: \"kubernetes.io/projected/578a54f5-2d6f-4c21-b549-55cd00237570-kube-api-access-94lhq\") pod \"578a54f5-2d6f-4c21-b549-55cd00237570\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " Nov 24 21:42:17 crc kubenswrapper[4915]: I1124 21:42:17.970687 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-combined-ca-bundle\") pod \"578a54f5-2d6f-4c21-b549-55cd00237570\" (UID: \"578a54f5-2d6f-4c21-b549-55cd00237570\") " Nov 24 21:42:17 crc kubenswrapper[4915]: I1124 21:42:17.971074 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/578a54f5-2d6f-4c21-b549-55cd00237570-logs" (OuterVolumeSpecName: "logs") pod "578a54f5-2d6f-4c21-b549-55cd00237570" (UID: "578a54f5-2d6f-4c21-b549-55cd00237570"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:17 crc kubenswrapper[4915]: I1124 21:42:17.971266 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/578a54f5-2d6f-4c21-b549-55cd00237570-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:17 crc kubenswrapper[4915]: I1124 21:42:17.979883 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-scripts" (OuterVolumeSpecName: "scripts") pod "578a54f5-2d6f-4c21-b549-55cd00237570" (UID: "578a54f5-2d6f-4c21-b549-55cd00237570"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:17 crc kubenswrapper[4915]: I1124 21:42:17.980077 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/578a54f5-2d6f-4c21-b549-55cd00237570-kube-api-access-94lhq" (OuterVolumeSpecName: "kube-api-access-94lhq") pod "578a54f5-2d6f-4c21-b549-55cd00237570" (UID: "578a54f5-2d6f-4c21-b549-55cd00237570"). InnerVolumeSpecName "kube-api-access-94lhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.019238 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-config-data" (OuterVolumeSpecName: "config-data") pod "578a54f5-2d6f-4c21-b549-55cd00237570" (UID: "578a54f5-2d6f-4c21-b549-55cd00237570"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.038179 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "578a54f5-2d6f-4c21-b549-55cd00237570" (UID: "578a54f5-2d6f-4c21-b549-55cd00237570"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.073112 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.073144 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94lhq\" (UniqueName: \"kubernetes.io/projected/578a54f5-2d6f-4c21-b549-55cd00237570-kube-api-access-94lhq\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.073157 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.073165 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/578a54f5-2d6f-4c21-b549-55cd00237570-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.109660 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1a56691-c50a-4917-9331-4920a62c5a3b","Type":"ContainerStarted","Data":"d4e4e77ad89876f92a5feeb744fc6ed7d62060a1cb6473adc880a964c65dce0e"} Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.111908 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2dxdv" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.111911 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2dxdv" event={"ID":"578a54f5-2d6f-4c21-b549-55cd00237570","Type":"ContainerDied","Data":"8afc805208fa72676a2ed26e58ca2ae2ec2d00b48d81a6b3012984bec3df0488"} Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.112154 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8afc805208fa72676a2ed26e58ca2ae2ec2d00b48d81a6b3012984bec3df0488" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.113940 4915 generic.go:334] "Generic (PLEG): container finished" podID="8c21efec-e27b-4e9e-bdc6-a4d9a0eab412" containerID="ce8b42e8ea3db5b79b2a1cb9cb42c5123911d4be24151650c31db77bdffa919d" exitCode=0 Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.114039 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6jszw" event={"ID":"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412","Type":"ContainerDied","Data":"ce8b42e8ea3db5b79b2a1cb9cb42c5123911d4be24151650c31db77bdffa919d"} Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.530389 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.696355 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnvk6\" (UniqueName: \"kubernetes.io/projected/582b5498-6320-42dc-9a5b-9a6fd28c791b-kube-api-access-gnvk6\") pod \"582b5498-6320-42dc-9a5b-9a6fd28c791b\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.696449 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-fernet-keys\") pod \"582b5498-6320-42dc-9a5b-9a6fd28c791b\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.696552 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-config-data\") pod \"582b5498-6320-42dc-9a5b-9a6fd28c791b\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.696684 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-scripts\") pod \"582b5498-6320-42dc-9a5b-9a6fd28c791b\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.696794 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-credential-keys\") pod \"582b5498-6320-42dc-9a5b-9a6fd28c791b\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.696856 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-combined-ca-bundle\") pod \"582b5498-6320-42dc-9a5b-9a6fd28c791b\" (UID: \"582b5498-6320-42dc-9a5b-9a6fd28c791b\") " Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.703692 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "582b5498-6320-42dc-9a5b-9a6fd28c791b" (UID: "582b5498-6320-42dc-9a5b-9a6fd28c791b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.704824 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "582b5498-6320-42dc-9a5b-9a6fd28c791b" (UID: "582b5498-6320-42dc-9a5b-9a6fd28c791b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.704868 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582b5498-6320-42dc-9a5b-9a6fd28c791b-kube-api-access-gnvk6" (OuterVolumeSpecName: "kube-api-access-gnvk6") pod "582b5498-6320-42dc-9a5b-9a6fd28c791b" (UID: "582b5498-6320-42dc-9a5b-9a6fd28c791b"). InnerVolumeSpecName "kube-api-access-gnvk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.706172 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-scripts" (OuterVolumeSpecName: "scripts") pod "582b5498-6320-42dc-9a5b-9a6fd28c791b" (UID: "582b5498-6320-42dc-9a5b-9a6fd28c791b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.730186 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-config-data" (OuterVolumeSpecName: "config-data") pod "582b5498-6320-42dc-9a5b-9a6fd28c791b" (UID: "582b5498-6320-42dc-9a5b-9a6fd28c791b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.735865 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "582b5498-6320-42dc-9a5b-9a6fd28c791b" (UID: "582b5498-6320-42dc-9a5b-9a6fd28c791b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.799485 4915 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.799514 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.799529 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnvk6\" (UniqueName: \"kubernetes.io/projected/582b5498-6320-42dc-9a5b-9a6fd28c791b-kube-api-access-gnvk6\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.799542 4915 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-fernet-keys\") on node \"crc\" 
DevicePath \"\"" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.799552 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.799562 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/582b5498-6320-42dc-9a5b-9a6fd28c791b-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.971142 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 21:42:18 crc kubenswrapper[4915]: I1124 21:42:18.971216 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.007711 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.021995 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.124761 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lf4m9" event={"ID":"582b5498-6320-42dc-9a5b-9a6fd28c791b","Type":"ContainerDied","Data":"be92341b1612fb992f3be45afdde43d0b98264bbb1d810e850de1d84d53165a8"} Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.124830 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be92341b1612fb992f3be45afdde43d0b98264bbb1d810e850de1d84d53165a8" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.124910 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lf4m9" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.125234 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.125279 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.132725 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6db74dbdbb-6ntbq"] Nov 24 21:42:19 crc kubenswrapper[4915]: E1124 21:42:19.133586 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582b5498-6320-42dc-9a5b-9a6fd28c791b" containerName="keystone-bootstrap" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.133636 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="582b5498-6320-42dc-9a5b-9a6fd28c791b" containerName="keystone-bootstrap" Nov 24 21:42:19 crc kubenswrapper[4915]: E1124 21:42:19.133665 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="578a54f5-2d6f-4c21-b549-55cd00237570" containerName="placement-db-sync" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.133701 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="578a54f5-2d6f-4c21-b549-55cd00237570" containerName="placement-db-sync" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.134138 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="582b5498-6320-42dc-9a5b-9a6fd28c791b" containerName="keystone-bootstrap" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.134441 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="578a54f5-2d6f-4c21-b549-55cd00237570" containerName="placement-db-sync" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.136580 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.140083 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.140570 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.140810 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.141489 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.141716 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-dlc2b" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.147345 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6db74dbdbb-6ntbq"] Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.173013 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.173270 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.173430 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.173511 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.244248 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 
21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.301647 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.319888 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-combined-ca-bundle\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.319940 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-public-tls-certs\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.319962 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-logs\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.320046 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-scripts\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.320086 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pdg5\" (UniqueName: 
\"kubernetes.io/projected/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-kube-api-access-4pdg5\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.320106 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-config-data\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.320131 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-internal-tls-certs\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.341540 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6b68f549f-hk4pm"] Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.346401 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.350082 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.350164 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-x7cdq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.350086 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.351397 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.352255 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.358972 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.370267 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6b68f549f-hk4pm"] Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.384006 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.423597 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-logs\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.423643 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-internal-tls-certs\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.423720 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-scripts\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.423749 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-config-data\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.423810 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pdg5\" (UniqueName: \"kubernetes.io/projected/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-kube-api-access-4pdg5\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.423829 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-config-data\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.423858 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-scripts\") 
pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.423887 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-public-tls-certs\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.423918 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-internal-tls-certs\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.424005 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-fernet-keys\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.424076 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-credential-keys\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.424108 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-combined-ca-bundle\") pod 
\"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.424172 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-combined-ca-bundle\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.424227 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb6j9\" (UniqueName: \"kubernetes.io/projected/9535890f-6177-4219-9584-7e0200661d82-kube-api-access-vb6j9\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.424267 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-public-tls-certs\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.425718 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-logs\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.435276 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-public-tls-certs\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") 
" pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.438380 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-config-data\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.472803 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-internal-tls-certs\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.472836 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-scripts\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.472887 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-combined-ca-bundle\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.477400 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pdg5\" (UniqueName: \"kubernetes.io/projected/ff00b457-c47c-46b0-aa05-a0fe3d54ffa0-kube-api-access-4pdg5\") pod \"placement-6db74dbdbb-6ntbq\" (UID: \"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0\") " pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.481394 4915 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.529539 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-scripts\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.529730 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-public-tls-certs\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.529858 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-fernet-keys\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.529938 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-credential-keys\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.529969 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-combined-ca-bundle\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" 
Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.530028 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb6j9\" (UniqueName: \"kubernetes.io/projected/9535890f-6177-4219-9584-7e0200661d82-kube-api-access-vb6j9\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.530061 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-internal-tls-certs\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.530151 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-config-data\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.555632 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-public-tls-certs\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.558168 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-scripts\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.559359 4915 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-internal-tls-certs\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.563441 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-88gfs"] Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.563672 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" podUID="8fd83616-8f63-4754-8286-5c25487c8b9c" containerName="dnsmasq-dns" containerID="cri-o://24c2d0e96b3cc6a144cfec8e590bd50c82ffa45b995f78adae77ea1e3f89a80d" gracePeriod=10 Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.569791 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-credential-keys\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.577263 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-config-data\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.582887 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-combined-ca-bundle\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.584887 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9535890f-6177-4219-9584-7e0200661d82-fernet-keys\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.588851 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb6j9\" (UniqueName: \"kubernetes.io/projected/9535890f-6177-4219-9584-7e0200661d82-kube-api-access-vb6j9\") pod \"keystone-6b68f549f-hk4pm\" (UID: \"9535890f-6177-4219-9584-7e0200661d82\") " pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.685445 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.890947 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-6jszw" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.939430 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-config-data\") pod \"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\" (UID: \"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\") " Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.939720 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-combined-ca-bundle\") pod \"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\" (UID: \"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\") " Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.939990 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdfq9\" (UniqueName: \"kubernetes.io/projected/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-kube-api-access-kdfq9\") pod 
\"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\" (UID: \"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412\") " Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.963633 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-kube-api-access-kdfq9" (OuterVolumeSpecName: "kube-api-access-kdfq9") pod "8c21efec-e27b-4e9e-bdc6-a4d9a0eab412" (UID: "8c21efec-e27b-4e9e-bdc6-a4d9a0eab412"). InnerVolumeSpecName "kube-api-access-kdfq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:19 crc kubenswrapper[4915]: I1124 21:42:19.993508 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c21efec-e27b-4e9e-bdc6-a4d9a0eab412" (UID: "8c21efec-e27b-4e9e-bdc6-a4d9a0eab412"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.042393 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.042809 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdfq9\" (UniqueName: \"kubernetes.io/projected/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-kube-api-access-kdfq9\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.108527 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-config-data" (OuterVolumeSpecName: "config-data") pod "8c21efec-e27b-4e9e-bdc6-a4d9a0eab412" (UID: "8c21efec-e27b-4e9e-bdc6-a4d9a0eab412"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.130935 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6db74dbdbb-6ntbq"] Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.146215 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.183466 4915 generic.go:334] "Generic (PLEG): container finished" podID="8fd83616-8f63-4754-8286-5c25487c8b9c" containerID="24c2d0e96b3cc6a144cfec8e590bd50c82ffa45b995f78adae77ea1e3f89a80d" exitCode=0 Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.183537 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" event={"ID":"8fd83616-8f63-4754-8286-5c25487c8b9c","Type":"ContainerDied","Data":"24c2d0e96b3cc6a144cfec8e590bd50c82ffa45b995f78adae77ea1e3f89a80d"} Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.195630 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-6jszw" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.197554 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-6jszw" event={"ID":"8c21efec-e27b-4e9e-bdc6-a4d9a0eab412","Type":"ContainerDied","Data":"4f64b68fe67c5a007344a93686f60c363f22edfb40921743ce739eb4d278e0be"} Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.197584 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f64b68fe67c5a007344a93686f60c363f22edfb40921743ce739eb4d278e0be" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.266501 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.343675 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6b68f549f-hk4pm"] Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.465319 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-dns-swift-storage-0\") pod \"8fd83616-8f63-4754-8286-5c25487c8b9c\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.465465 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-dns-svc\") pod \"8fd83616-8f63-4754-8286-5c25487c8b9c\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.465499 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8vp\" (UniqueName: \"kubernetes.io/projected/8fd83616-8f63-4754-8286-5c25487c8b9c-kube-api-access-4g8vp\") pod \"8fd83616-8f63-4754-8286-5c25487c8b9c\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.465647 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-ovsdbserver-sb\") pod \"8fd83616-8f63-4754-8286-5c25487c8b9c\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.465685 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-ovsdbserver-nb\") pod \"8fd83616-8f63-4754-8286-5c25487c8b9c\" (UID: 
\"8fd83616-8f63-4754-8286-5c25487c8b9c\") " Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.465732 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-config\") pod \"8fd83616-8f63-4754-8286-5c25487c8b9c\" (UID: \"8fd83616-8f63-4754-8286-5c25487c8b9c\") " Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.512763 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd83616-8f63-4754-8286-5c25487c8b9c-kube-api-access-4g8vp" (OuterVolumeSpecName: "kube-api-access-4g8vp") pod "8fd83616-8f63-4754-8286-5c25487c8b9c" (UID: "8fd83616-8f63-4754-8286-5c25487c8b9c"). InnerVolumeSpecName "kube-api-access-4g8vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.581308 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g8vp\" (UniqueName: \"kubernetes.io/projected/8fd83616-8f63-4754-8286-5c25487c8b9c-kube-api-access-4g8vp\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.607326 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8fd83616-8f63-4754-8286-5c25487c8b9c" (UID: "8fd83616-8f63-4754-8286-5c25487c8b9c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.664563 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-config" (OuterVolumeSpecName: "config") pod "8fd83616-8f63-4754-8286-5c25487c8b9c" (UID: "8fd83616-8f63-4754-8286-5c25487c8b9c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.682762 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.682798 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.704976 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8fd83616-8f63-4754-8286-5c25487c8b9c" (UID: "8fd83616-8f63-4754-8286-5c25487c8b9c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.705839 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8fd83616-8f63-4754-8286-5c25487c8b9c" (UID: "8fd83616-8f63-4754-8286-5c25487c8b9c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.785903 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.785942 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.896347 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8fd83616-8f63-4754-8286-5c25487c8b9c" (UID: "8fd83616-8f63-4754-8286-5c25487c8b9c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 21:42:20 crc kubenswrapper[4915]: I1124 21:42:20.994632 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fd83616-8f63-4754-8286-5c25487c8b9c-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.212277 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b68f549f-hk4pm" event={"ID":"9535890f-6177-4219-9584-7e0200661d82","Type":"ContainerStarted","Data":"d139c105ce86ebc120770f1902d6d6dad63e9311a2b600865df62d6fb40be35b"}
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.212763 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b68f549f-hk4pm" event={"ID":"9535890f-6177-4219-9584-7e0200661d82","Type":"ContainerStarted","Data":"8503a4f40427f1746d91cdd15588d331845cdb68410c53504b7f163c87392bd1"}
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.212805 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6b68f549f-hk4pm"
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.219892 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs" event={"ID":"8fd83616-8f63-4754-8286-5c25487c8b9c","Type":"ContainerDied","Data":"99db960ad300d200d79810da8193f405b3a3927f1a89dc56ced41c583b1c6b59"}
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.219946 4915 scope.go:117] "RemoveContainer" containerID="24c2d0e96b3cc6a144cfec8e590bd50c82ffa45b995f78adae77ea1e3f89a80d"
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.220076 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-88gfs"
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.229409 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6db74dbdbb-6ntbq" event={"ID":"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0","Type":"ContainerStarted","Data":"6749dbceacb222d91ce37870e8b6bca95651c2c230267f7ba63275627a1f2e6a"}
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.229449 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6db74dbdbb-6ntbq" event={"ID":"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0","Type":"ContainerStarted","Data":"2c837fc1cba834f215f8406b325c3a88a04bd621582096f8dfa2458d93a7c0fa"}
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.229461 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6db74dbdbb-6ntbq" event={"ID":"ff00b457-c47c-46b0-aa05-a0fe3d54ffa0","Type":"ContainerStarted","Data":"a71e90e5618cf6345d61a3d5fb6048c315dbf53c43fa54da7efa257ad284f19a"}
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.230875 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6db74dbdbb-6ntbq"
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.230952 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6db74dbdbb-6ntbq"
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.260340 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6b68f549f-hk4pm" podStartSLOduration=2.260313133 podStartE2EDuration="2.260313133s" podCreationTimestamp="2025-11-24 21:42:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:21.229275126 +0000 UTC m=+1359.545527299" watchObservedRunningTime="2025-11-24 21:42:21.260313133 +0000 UTC m=+1359.576565316"
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.265736 4915 scope.go:117] "RemoveContainer" containerID="daf0c36e8f6c6d0b6ffb066a05d60ae761d9e2d8af9fa4e59e0debde414cc076"
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.269839 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6db74dbdbb-6ntbq" podStartSLOduration=2.269818899 podStartE2EDuration="2.269818899s" podCreationTimestamp="2025-11-24 21:42:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:21.258147884 +0000 UTC m=+1359.574400067" watchObservedRunningTime="2025-11-24 21:42:21.269818899 +0000 UTC m=+1359.586071072"
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.288495 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-88gfs"]
Nov 24 21:42:21 crc kubenswrapper[4915]: I1124 21:42:21.295904 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-88gfs"]
Nov 24 21:42:22 crc kubenswrapper[4915]: I1124 21:42:22.250076 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kxbs8" event={"ID":"8612c3b5-24cc-431a-a888-8be923564356","Type":"ContainerStarted","Data":"f14c105755b0b195f16aa2e008f514c7b484c7b9e33ba2ae6f3d7b07c625cfd7"}
Nov 24 21:42:22 crc kubenswrapper[4915]: I1124 21:42:22.268266 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-kxbs8" podStartSLOduration=3.577656203 podStartE2EDuration="42.26825144s" podCreationTimestamp="2025-11-24 21:41:40 +0000 UTC" firstStartedPulling="2025-11-24 21:41:42.261536207 +0000 UTC m=+1320.577788380" lastFinishedPulling="2025-11-24 21:42:20.952131444 +0000 UTC m=+1359.268383617" observedRunningTime="2025-11-24 21:42:22.267423318 +0000 UTC m=+1360.583675511" watchObservedRunningTime="2025-11-24 21:42:22.26825144 +0000 UTC m=+1360.584503613"
Nov 24 21:42:22 crc kubenswrapper[4915]: I1124 21:42:22.444126 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fd83616-8f63-4754-8286-5c25487c8b9c" path="/var/lib/kubelet/pods/8fd83616-8f63-4754-8286-5c25487c8b9c/volumes"
Nov 24 21:42:23 crc kubenswrapper[4915]: I1124 21:42:23.259592 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9t9tc" event={"ID":"0646b5c8-f87a-4f27-9327-1bc87669623f","Type":"ContainerStarted","Data":"45a0e566a416f85d2f01b60ed2472996b7b19a62bc2f30b07c270d1c82cc380b"}
Nov 24 21:42:23 crc kubenswrapper[4915]: I1124 21:42:23.284996 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-9t9tc" podStartSLOduration=3.287634394 podStartE2EDuration="43.284974034s" podCreationTimestamp="2025-11-24 21:41:40 +0000 UTC" firstStartedPulling="2025-11-24 21:41:42.038668698 +0000 UTC m=+1320.354920871" lastFinishedPulling="2025-11-24 21:42:22.036008338 +0000 UTC m=+1360.352260511" observedRunningTime="2025-11-24 21:42:23.277721208 +0000 UTC m=+1361.593973401" watchObservedRunningTime="2025-11-24 21:42:23.284974034 +0000 UTC m=+1361.601226207"
Nov 24 21:42:23 crc kubenswrapper[4915]: I1124 21:42:23.494068 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 24 21:42:23 crc kubenswrapper[4915]: I1124 21:42:23.494495 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 21:42:23 crc kubenswrapper[4915]: I1124 21:42:23.495157 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 21:42:23 crc kubenswrapper[4915]: I1124 21:42:23.495295 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 24 21:42:23 crc kubenswrapper[4915]: I1124 21:42:23.496871 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 24 21:42:23 crc kubenswrapper[4915]: I1124 21:42:23.505146 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 24 21:42:24 crc kubenswrapper[4915]: I1124 21:42:24.326938 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 21:42:24 crc kubenswrapper[4915]: I1124 21:42:24.327334 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 21:42:24 crc kubenswrapper[4915]: I1124 21:42:24.327420 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd"
Nov 24 21:42:24 crc kubenswrapper[4915]: I1124 21:42:24.328485 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dce5b421849bc6dedc5b880936ade9c03271bcdbe605ecf2cad976e72aebbd14"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 21:42:24 crc kubenswrapper[4915]: I1124 21:42:24.328540 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://dce5b421849bc6dedc5b880936ade9c03271bcdbe605ecf2cad976e72aebbd14" gracePeriod=600
Nov 24 21:42:25 crc kubenswrapper[4915]: I1124 21:42:25.281377 4915 generic.go:334] "Generic (PLEG): container finished" podID="8612c3b5-24cc-431a-a888-8be923564356" containerID="f14c105755b0b195f16aa2e008f514c7b484c7b9e33ba2ae6f3d7b07c625cfd7" exitCode=0
Nov 24 21:42:25 crc kubenswrapper[4915]: I1124 21:42:25.281547 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kxbs8" event={"ID":"8612c3b5-24cc-431a-a888-8be923564356","Type":"ContainerDied","Data":"f14c105755b0b195f16aa2e008f514c7b484c7b9e33ba2ae6f3d7b07c625cfd7"}
Nov 24 21:42:25 crc kubenswrapper[4915]: I1124 21:42:25.287485 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="dce5b421849bc6dedc5b880936ade9c03271bcdbe605ecf2cad976e72aebbd14" exitCode=0
Nov 24 21:42:25 crc kubenswrapper[4915]: I1124 21:42:25.287528 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"dce5b421849bc6dedc5b880936ade9c03271bcdbe605ecf2cad976e72aebbd14"}
Nov 24 21:42:25 crc kubenswrapper[4915]: I1124 21:42:25.287560 4915 scope.go:117] "RemoveContainer" containerID="c001cd4c9ce6030e46567b52ccde6925b9b174f41dc20336633c7c1d5f367107"
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.064509 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kxbs8"
Nov 24 21:42:28 crc kubenswrapper[4915]: E1124 21:42:28.170557 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="c1a56691-c50a-4917-9331-4920a62c5a3b"
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.186383 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8612c3b5-24cc-431a-a888-8be923564356-db-sync-config-data\") pod \"8612c3b5-24cc-431a-a888-8be923564356\" (UID: \"8612c3b5-24cc-431a-a888-8be923564356\") "
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.186431 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8612c3b5-24cc-431a-a888-8be923564356-combined-ca-bundle\") pod \"8612c3b5-24cc-431a-a888-8be923564356\" (UID: \"8612c3b5-24cc-431a-a888-8be923564356\") "
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.186549 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2k2d\" (UniqueName: \"kubernetes.io/projected/8612c3b5-24cc-431a-a888-8be923564356-kube-api-access-p2k2d\") pod \"8612c3b5-24cc-431a-a888-8be923564356\" (UID: \"8612c3b5-24cc-431a-a888-8be923564356\") "
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.191074 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8612c3b5-24cc-431a-a888-8be923564356-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8612c3b5-24cc-431a-a888-8be923564356" (UID: "8612c3b5-24cc-431a-a888-8be923564356"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.195457 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8612c3b5-24cc-431a-a888-8be923564356-kube-api-access-p2k2d" (OuterVolumeSpecName: "kube-api-access-p2k2d") pod "8612c3b5-24cc-431a-a888-8be923564356" (UID: "8612c3b5-24cc-431a-a888-8be923564356"). InnerVolumeSpecName "kube-api-access-p2k2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.220159 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8612c3b5-24cc-431a-a888-8be923564356-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8612c3b5-24cc-431a-a888-8be923564356" (UID: "8612c3b5-24cc-431a-a888-8be923564356"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.288998 4915 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8612c3b5-24cc-431a-a888-8be923564356-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.289306 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8612c3b5-24cc-431a-a888-8be923564356-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.289315 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2k2d\" (UniqueName: \"kubernetes.io/projected/8612c3b5-24cc-431a-a888-8be923564356-kube-api-access-p2k2d\") on node \"crc\" DevicePath \"\""
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.331749 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489"}
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.337084 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1a56691-c50a-4917-9331-4920a62c5a3b","Type":"ContainerStarted","Data":"80981c977a1b036d4cd5e3ddcde9998a269162870a19d86273a4f119d352b187"}
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.337236 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerName="ceilometer-notification-agent" containerID="cri-o://1b10ff0cafbb37746e4eda668a03a692705626c3d1ca7de1b39096e6444c2157" gracePeriod=30
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.337441 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.337483 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerName="proxy-httpd" containerID="cri-o://80981c977a1b036d4cd5e3ddcde9998a269162870a19d86273a4f119d352b187" gracePeriod=30
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.337520 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerName="sg-core" containerID="cri-o://d4e4e77ad89876f92a5feeb744fc6ed7d62060a1cb6473adc880a964c65dce0e" gracePeriod=30
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.346149 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kxbs8" event={"ID":"8612c3b5-24cc-431a-a888-8be923564356","Type":"ContainerDied","Data":"ae168591854fb450794eedc81a09117ffeb67a5c21bb5bd99b1f92cc45097998"}
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.346196 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae168591854fb450794eedc81a09117ffeb67a5c21bb5bd99b1f92cc45097998"
Nov 24 21:42:28 crc kubenswrapper[4915]: I1124 21:42:28.346266 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kxbs8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.333173 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-66f8975c4c-2r5c7"]
Nov 24 21:42:29 crc kubenswrapper[4915]: E1124 21:42:29.351896 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c21efec-e27b-4e9e-bdc6-a4d9a0eab412" containerName="heat-db-sync"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.351952 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c21efec-e27b-4e9e-bdc6-a4d9a0eab412" containerName="heat-db-sync"
Nov 24 21:42:29 crc kubenswrapper[4915]: E1124 21:42:29.352009 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8612c3b5-24cc-431a-a888-8be923564356" containerName="barbican-db-sync"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.352015 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8612c3b5-24cc-431a-a888-8be923564356" containerName="barbican-db-sync"
Nov 24 21:42:29 crc kubenswrapper[4915]: E1124 21:42:29.352044 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd83616-8f63-4754-8286-5c25487c8b9c" containerName="dnsmasq-dns"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.352051 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd83616-8f63-4754-8286-5c25487c8b9c" containerName="dnsmasq-dns"
Nov 24 21:42:29 crc kubenswrapper[4915]: E1124 21:42:29.352068 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd83616-8f63-4754-8286-5c25487c8b9c" containerName="init"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.352073 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd83616-8f63-4754-8286-5c25487c8b9c" containerName="init"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.352434 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd83616-8f63-4754-8286-5c25487c8b9c" containerName="dnsmasq-dns"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.352447 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8612c3b5-24cc-431a-a888-8be923564356" containerName="barbican-db-sync"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.352467 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c21efec-e27b-4e9e-bdc6-a4d9a0eab412" containerName="heat-db-sync"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.353665 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"]
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.355118 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.356966 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.360326 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-66f8975c4c-2r5c7"]
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.364192 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.364450 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.364564 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5td8d"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.373029 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.380210 4915 generic.go:334] "Generic (PLEG): container finished" podID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerID="80981c977a1b036d4cd5e3ddcde9998a269162870a19d86273a4f119d352b187" exitCode=0
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.380256 4915 generic.go:334] "Generic (PLEG): container finished" podID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerID="d4e4e77ad89876f92a5feeb744fc6ed7d62060a1cb6473adc880a964c65dce0e" exitCode=2
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.380334 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1a56691-c50a-4917-9331-4920a62c5a3b","Type":"ContainerDied","Data":"80981c977a1b036d4cd5e3ddcde9998a269162870a19d86273a4f119d352b187"}
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.380366 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1a56691-c50a-4917-9331-4920a62c5a3b","Type":"ContainerDied","Data":"d4e4e77ad89876f92a5feeb744fc6ed7d62060a1cb6473adc880a964c65dce0e"}
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.386153 4915 generic.go:334] "Generic (PLEG): container finished" podID="0646b5c8-f87a-4f27-9327-1bc87669623f" containerID="45a0e566a416f85d2f01b60ed2472996b7b19a62bc2f30b07c270d1c82cc380b" exitCode=0
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.386403 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9t9tc" event={"ID":"0646b5c8-f87a-4f27-9327-1bc87669623f","Type":"ContainerDied","Data":"45a0e566a416f85d2f01b60ed2472996b7b19a62bc2f30b07c270d1c82cc380b"}
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.393083 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"]
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.408306 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-n4drj"]
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.410186 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-n4drj"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.417721 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b2a93f7-9082-4391-b366-64cd870dc30e-config-data\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.418255 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn86r\" (UniqueName: \"kubernetes.io/projected/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-kube-api-access-rn86r\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.418319 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-config-data\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.418428 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-logs\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.418457 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2a93f7-9082-4391-b366-64cd870dc30e-combined-ca-bundle\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.418673 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b2a93f7-9082-4391-b366-64cd870dc30e-logs\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.418893 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-config-data-custom\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.418931 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4dlb\" (UniqueName: \"kubernetes.io/projected/2b2a93f7-9082-4391-b366-64cd870dc30e-kube-api-access-q4dlb\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.418963 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b2a93f7-9082-4391-b366-64cd870dc30e-config-data-custom\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.418995 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-combined-ca-bundle\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.433811 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-n4drj"]
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.520851 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b2a93f7-9082-4391-b366-64cd870dc30e-logs\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.521025 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-config-data-custom\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.521058 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4dlb\" (UniqueName: \"kubernetes.io/projected/2b2a93f7-9082-4391-b366-64cd870dc30e-kube-api-access-q4dlb\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.521123 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.521156 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b2a93f7-9082-4391-b366-64cd870dc30e-config-data-custom\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.521187 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-combined-ca-bundle\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.521218 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.521306 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b2a93f7-9082-4391-b366-64cd870dc30e-config-data\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.521348 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.521373 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2b2a93f7-9082-4391-b366-64cd870dc30e-logs\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.521459 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn86r\" (UniqueName: \"kubernetes.io/projected/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-kube-api-access-rn86r\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.521520 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-config-data\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.521561 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-config\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.521797 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2a93f7-9082-4391-b366-64cd870dc30e-combined-ca-bundle\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.521838 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-logs\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.522125 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp52t\" (UniqueName: \"kubernetes.io/projected/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-kube-api-access-fp52t\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.522182 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.522141 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-logs\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.547039 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-dcff88f94-kdzrd"]
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.549134 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-dcff88f94-kdzrd"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.554510 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.558627 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4dlb\" (UniqueName: \"kubernetes.io/projected/2b2a93f7-9082-4391-b366-64cd870dc30e-kube-api-access-q4dlb\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.566342 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-dcff88f94-kdzrd"]
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.575054 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b2a93f7-9082-4391-b366-64cd870dc30e-combined-ca-bundle\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.576056 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2b2a93f7-9082-4391-b366-64cd870dc30e-config-data-custom\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.576496 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-combined-ca-bundle\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.577243 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-config-data\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.577259 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-config-data-custom\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.579437 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b2a93f7-9082-4391-b366-64cd870dc30e-config-data\") pod \"barbican-worker-66f8975c4c-2r5c7\" (UID: \"2b2a93f7-9082-4391-b366-64cd870dc30e\") " pod="openstack/barbican-worker-66f8975c4c-2r5c7"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.588505 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn86r\" (UniqueName: \"kubernetes.io/projected/2bce36ae-dbff-4c97-9f8e-43edd44dfad0-kube-api-access-rn86r\") pod \"barbican-keystone-listener-7dd95d5f64-ht7d8\" (UID: \"2bce36ae-dbff-4c97-9f8e-43edd44dfad0\") " pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"
Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.623666 4915 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-config\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.623726 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr2c4\" (UniqueName: \"kubernetes.io/projected/bf0c5bda-da34-4d01-8806-6016a8a24a9a-kube-api-access-sr2c4\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.623749 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp52t\" (UniqueName: \"kubernetes.io/projected/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-kube-api-access-fp52t\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.623769 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.623828 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0c5bda-da34-4d01-8806-6016a8a24a9a-logs\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.623954 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.624217 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-config-data-custom\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.624252 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.624302 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.624545 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-config-data\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.624567 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-combined-ca-bundle\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.625881 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-config\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.627436 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.629998 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.630558 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.631072 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.642505 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp52t\" (UniqueName: \"kubernetes.io/projected/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-kube-api-access-fp52t\") pod \"dnsmasq-dns-848cf88cfc-n4drj\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.720799 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.726530 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr2c4\" (UniqueName: \"kubernetes.io/projected/bf0c5bda-da34-4d01-8806-6016a8a24a9a-kube-api-access-sr2c4\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.726598 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0c5bda-da34-4d01-8806-6016a8a24a9a-logs\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.726706 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-config-data-custom\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc 
kubenswrapper[4915]: I1124 21:42:29.726792 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-config-data\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.726816 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-combined-ca-bundle\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.727302 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0c5bda-da34-4d01-8806-6016a8a24a9a-logs\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.730942 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-config-data-custom\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.732088 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-config-data\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.740321 4915 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-combined-ca-bundle\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.749366 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr2c4\" (UniqueName: \"kubernetes.io/projected/bf0c5bda-da34-4d01-8806-6016a8a24a9a-kube-api-access-sr2c4\") pod \"barbican-api-dcff88f94-kdzrd\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.749397 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-66f8975c4c-2r5c7" Nov 24 21:42:29 crc kubenswrapper[4915]: I1124 21:42:29.757881 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:30 crc kubenswrapper[4915]: I1124 21:42:30.009003 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:30 crc kubenswrapper[4915]: I1124 21:42:30.323170 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7dd95d5f64-ht7d8"] Nov 24 21:42:30 crc kubenswrapper[4915]: I1124 21:42:30.399981 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8" event={"ID":"2bce36ae-dbff-4c97-9f8e-43edd44dfad0","Type":"ContainerStarted","Data":"b8b7e4856e0cf346c91b46e890dcc633b67372fea4b9dfa702088371b387b7ab"} Nov 24 21:42:30 crc kubenswrapper[4915]: I1124 21:42:30.406318 4915 generic.go:334] "Generic (PLEG): container finished" podID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerID="1b10ff0cafbb37746e4eda668a03a692705626c3d1ca7de1b39096e6444c2157" exitCode=0 Nov 24 21:42:30 crc kubenswrapper[4915]: I1124 21:42:30.406485 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1a56691-c50a-4917-9331-4920a62c5a3b","Type":"ContainerDied","Data":"1b10ff0cafbb37746e4eda668a03a692705626c3d1ca7de1b39096e6444c2157"} Nov 24 21:42:30 crc kubenswrapper[4915]: I1124 21:42:30.456900 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-66f8975c4c-2r5c7"] Nov 24 21:42:30 crc kubenswrapper[4915]: W1124 21:42:30.467064 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b2a93f7_9082_4391_b366_64cd870dc30e.slice/crio-a6e1657a2713690f6a74f63bb6367fa0b1330886eb8fd5fb3b9994f8c6eb4bb7 WatchSource:0}: Error finding container a6e1657a2713690f6a74f63bb6367fa0b1330886eb8fd5fb3b9994f8c6eb4bb7: Status 404 returned error can't find the container with id a6e1657a2713690f6a74f63bb6367fa0b1330886eb8fd5fb3b9994f8c6eb4bb7 Nov 24 21:42:30 crc kubenswrapper[4915]: I1124 21:42:30.790381 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-848cf88cfc-n4drj"] Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.030681 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-dcff88f94-kdzrd"] Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.187722 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.202580 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.275825 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b7vt\" (UniqueName: \"kubernetes.io/projected/c1a56691-c50a-4917-9331-4920a62c5a3b-kube-api-access-9b7vt\") pod \"c1a56691-c50a-4917-9331-4920a62c5a3b\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.275904 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a56691-c50a-4917-9331-4920a62c5a3b-log-httpd\") pod \"c1a56691-c50a-4917-9331-4920a62c5a3b\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.275951 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-config-data\") pod \"0646b5c8-f87a-4f27-9327-1bc87669623f\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.276066 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-combined-ca-bundle\") pod \"c1a56691-c50a-4917-9331-4920a62c5a3b\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " Nov 24 21:42:31 
crc kubenswrapper[4915]: I1124 21:42:31.276148 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-scripts\") pod \"0646b5c8-f87a-4f27-9327-1bc87669623f\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.276180 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75jb4\" (UniqueName: \"kubernetes.io/projected/0646b5c8-f87a-4f27-9327-1bc87669623f-kube-api-access-75jb4\") pod \"0646b5c8-f87a-4f27-9327-1bc87669623f\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.276224 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a56691-c50a-4917-9331-4920a62c5a3b-run-httpd\") pod \"c1a56691-c50a-4917-9331-4920a62c5a3b\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.276279 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-sg-core-conf-yaml\") pod \"c1a56691-c50a-4917-9331-4920a62c5a3b\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.276398 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-db-sync-config-data\") pod \"0646b5c8-f87a-4f27-9327-1bc87669623f\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.276430 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-config-data\") pod \"c1a56691-c50a-4917-9331-4920a62c5a3b\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.276478 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0646b5c8-f87a-4f27-9327-1bc87669623f-etc-machine-id\") pod \"0646b5c8-f87a-4f27-9327-1bc87669623f\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.276521 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-scripts\") pod \"c1a56691-c50a-4917-9331-4920a62c5a3b\" (UID: \"c1a56691-c50a-4917-9331-4920a62c5a3b\") " Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.276561 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-combined-ca-bundle\") pod \"0646b5c8-f87a-4f27-9327-1bc87669623f\" (UID: \"0646b5c8-f87a-4f27-9327-1bc87669623f\") " Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.276791 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1a56691-c50a-4917-9331-4920a62c5a3b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c1a56691-c50a-4917-9331-4920a62c5a3b" (UID: "c1a56691-c50a-4917-9331-4920a62c5a3b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.277212 4915 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a56691-c50a-4917-9331-4920a62c5a3b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.277251 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0646b5c8-f87a-4f27-9327-1bc87669623f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0646b5c8-f87a-4f27-9327-1bc87669623f" (UID: "0646b5c8-f87a-4f27-9327-1bc87669623f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.277790 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1a56691-c50a-4917-9331-4920a62c5a3b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c1a56691-c50a-4917-9331-4920a62c5a3b" (UID: "c1a56691-c50a-4917-9331-4920a62c5a3b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.283323 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0646b5c8-f87a-4f27-9327-1bc87669623f" (UID: "0646b5c8-f87a-4f27-9327-1bc87669623f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.283409 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0646b5c8-f87a-4f27-9327-1bc87669623f-kube-api-access-75jb4" (OuterVolumeSpecName: "kube-api-access-75jb4") pod "0646b5c8-f87a-4f27-9327-1bc87669623f" (UID: "0646b5c8-f87a-4f27-9327-1bc87669623f"). 
InnerVolumeSpecName "kube-api-access-75jb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.283765 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1a56691-c50a-4917-9331-4920a62c5a3b-kube-api-access-9b7vt" (OuterVolumeSpecName: "kube-api-access-9b7vt") pod "c1a56691-c50a-4917-9331-4920a62c5a3b" (UID: "c1a56691-c50a-4917-9331-4920a62c5a3b"). InnerVolumeSpecName "kube-api-access-9b7vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.285050 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-scripts" (OuterVolumeSpecName: "scripts") pod "0646b5c8-f87a-4f27-9327-1bc87669623f" (UID: "0646b5c8-f87a-4f27-9327-1bc87669623f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.285503 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-scripts" (OuterVolumeSpecName: "scripts") pod "c1a56691-c50a-4917-9331-4920a62c5a3b" (UID: "c1a56691-c50a-4917-9331-4920a62c5a3b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.333564 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0646b5c8-f87a-4f27-9327-1bc87669623f" (UID: "0646b5c8-f87a-4f27-9327-1bc87669623f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.336653 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c1a56691-c50a-4917-9331-4920a62c5a3b" (UID: "c1a56691-c50a-4917-9331-4920a62c5a3b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.341932 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1a56691-c50a-4917-9331-4920a62c5a3b" (UID: "c1a56691-c50a-4917-9331-4920a62c5a3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.341948 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-config-data" (OuterVolumeSpecName: "config-data") pod "0646b5c8-f87a-4f27-9327-1bc87669623f" (UID: "0646b5c8-f87a-4f27-9327-1bc87669623f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.379189 4915 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1a56691-c50a-4917-9331-4920a62c5a3b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.379223 4915 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.379236 4915 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.379250 4915 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0646b5c8-f87a-4f27-9327-1bc87669623f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.379265 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.379276 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.379289 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b7vt\" (UniqueName: \"kubernetes.io/projected/c1a56691-c50a-4917-9331-4920a62c5a3b-kube-api-access-9b7vt\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.379301 4915 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.379313 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.379323 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0646b5c8-f87a-4f27-9327-1bc87669623f-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.379335 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75jb4\" (UniqueName: \"kubernetes.io/projected/0646b5c8-f87a-4f27-9327-1bc87669623f-kube-api-access-75jb4\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.383881 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-config-data" (OuterVolumeSpecName: "config-data") pod "c1a56691-c50a-4917-9331-4920a62c5a3b" (UID: "c1a56691-c50a-4917-9331-4920a62c5a3b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.425621 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1a56691-c50a-4917-9331-4920a62c5a3b","Type":"ContainerDied","Data":"c4097db2d57cba19ea40fa801ebaed3b5c6bbccbaf481853d584ed1cc7a85af0"} Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.425697 4915 scope.go:117] "RemoveContainer" containerID="80981c977a1b036d4cd5e3ddcde9998a269162870a19d86273a4f119d352b187" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.425909 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.438373 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dcff88f94-kdzrd" event={"ID":"bf0c5bda-da34-4d01-8806-6016a8a24a9a","Type":"ContainerStarted","Data":"ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1"} Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.438421 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dcff88f94-kdzrd" event={"ID":"bf0c5bda-da34-4d01-8806-6016a8a24a9a","Type":"ContainerStarted","Data":"5cb4bd4e3cd768a0bf0f2ad90c645e5aaa25273151296d5371653394719b0bb4"} Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.441906 4915 generic.go:334] "Generic (PLEG): container finished" podID="23b05b9c-7263-4bd9-9f62-d1c95e92fa68" containerID="1ab180827e1b531367c4a282a768c223edbc42d038cdf79bb1efead2548423a9" exitCode=0 Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.442001 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" event={"ID":"23b05b9c-7263-4bd9-9f62-d1c95e92fa68","Type":"ContainerDied","Data":"1ab180827e1b531367c4a282a768c223edbc42d038cdf79bb1efead2548423a9"} Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.442029 4915 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" event={"ID":"23b05b9c-7263-4bd9-9f62-d1c95e92fa68","Type":"ContainerStarted","Data":"9038df88f96f2faf8f1787489219e27ac6e865849435109a209575c3efe4f159"} Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.444546 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-66f8975c4c-2r5c7" event={"ID":"2b2a93f7-9082-4391-b366-64cd870dc30e","Type":"ContainerStarted","Data":"a6e1657a2713690f6a74f63bb6367fa0b1330886eb8fd5fb3b9994f8c6eb4bb7"} Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.446723 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9t9tc" event={"ID":"0646b5c8-f87a-4f27-9327-1bc87669623f","Type":"ContainerDied","Data":"b8ec9ffc362f4e834991d257afb0ab3c25e8dc8469bbd7d228d0c8f67e91daee"} Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.446925 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8ec9ffc362f4e834991d257afb0ab3c25e8dc8469bbd7d228d0c8f67e91daee" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.446819 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9t9tc" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.481185 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1a56691-c50a-4917-9331-4920a62c5a3b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.548826 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.563339 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.581406 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:42:31 crc kubenswrapper[4915]: E1124 21:42:31.581943 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerName="sg-core" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.581957 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerName="sg-core" Nov 24 21:42:31 crc kubenswrapper[4915]: E1124 21:42:31.581975 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0646b5c8-f87a-4f27-9327-1bc87669623f" containerName="cinder-db-sync" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.581981 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0646b5c8-f87a-4f27-9327-1bc87669623f" containerName="cinder-db-sync" Nov 24 21:42:31 crc kubenswrapper[4915]: E1124 21:42:31.581998 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerName="ceilometer-notification-agent" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.582005 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerName="ceilometer-notification-agent" Nov 24 21:42:31 crc 
kubenswrapper[4915]: E1124 21:42:31.582030 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerName="proxy-httpd" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.582036 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerName="proxy-httpd" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.582236 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerName="proxy-httpd" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.582254 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerName="sg-core" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.582277 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1a56691-c50a-4917-9331-4920a62c5a3b" containerName="ceilometer-notification-agent" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.582289 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0646b5c8-f87a-4f27-9327-1bc87669623f" containerName="cinder-db-sync" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.584301 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.585898 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.591154 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.591198 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.681979 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.683675 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.688526 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-29hzm" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.688890 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.689027 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.689235 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.691166 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-config-data\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.691322 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cddeb8d-6499-40ce-866b-6009ade95f6c-log-httpd\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.691388 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-scripts\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.691524 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cddeb8d-6499-40ce-866b-6009ade95f6c-run-httpd\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.691616 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hpw9\" (UniqueName: \"kubernetes.io/projected/8cddeb8d-6499-40ce-866b-6009ade95f6c-kube-api-access-7hpw9\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.691647 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.692043 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.733902 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.789144 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-n4drj"] Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.808063 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-config-data\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.808428 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cddeb8d-6499-40ce-866b-6009ade95f6c-log-httpd\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.808549 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-scripts\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.809148 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cddeb8d-6499-40ce-866b-6009ade95f6c-run-httpd\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.809286 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a117d18-c25b-4817-a114-fdb48bf5151c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.809482 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hpw9\" (UniqueName: \"kubernetes.io/projected/8cddeb8d-6499-40ce-866b-6009ade95f6c-kube-api-access-7hpw9\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.809841 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.809988 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.810099 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.810996 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.811107 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-config-data\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.811180 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-scripts\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.811280 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfh4n\" (UniqueName: \"kubernetes.io/projected/0a117d18-c25b-4817-a114-fdb48bf5151c-kube-api-access-kfh4n\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.811915 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cddeb8d-6499-40ce-866b-6009ade95f6c-log-httpd\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.813421 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-config-data\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " 
pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.818226 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cddeb8d-6499-40ce-866b-6009ade95f6c-run-httpd\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.825665 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.861584 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.861945 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-scripts\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.897872 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gzpxc"] Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.903975 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.917315 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a117d18-c25b-4817-a114-fdb48bf5151c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.917435 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.917487 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.917556 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-config-data\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.917581 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-scripts\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.917618 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kfh4n\" (UniqueName: \"kubernetes.io/projected/0a117d18-c25b-4817-a114-fdb48bf5151c-kube-api-access-kfh4n\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.918145 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a117d18-c25b-4817-a114-fdb48bf5151c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.919661 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hpw9\" (UniqueName: \"kubernetes.io/projected/8cddeb8d-6499-40ce-866b-6009ade95f6c-kube-api-access-7hpw9\") pod \"ceilometer-0\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.925636 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gzpxc"] Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.929755 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.930627 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-config-data\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.931135 4915 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.934175 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-scripts\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.953844 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfh4n\" (UniqueName: \"kubernetes.io/projected/0a117d18-c25b-4817-a114-fdb48bf5151c-kube-api-access-kfh4n\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:31 crc kubenswrapper[4915]: I1124 21:42:31.976634 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.024696 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5cx5\" (UniqueName: \"kubernetes.io/projected/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-kube-api-access-p5cx5\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.024785 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-dns-svc\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc 
kubenswrapper[4915]: I1124 21:42:32.024802 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-config\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.024816 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.024921 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.024961 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.025149 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.057518 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.059545 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.067341 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.100077 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.128443 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-config\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.128486 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-dns-svc\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.128503 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.128576 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-scripts\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.128740 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.128804 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-config-data-custom\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.128881 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skjss\" (UniqueName: \"kubernetes.io/projected/b5434383-9f5d-41f5-a4c7-cf7931b42919-kube-api-access-skjss\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.128928 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5434383-9f5d-41f5-a4c7-cf7931b42919-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.128956 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-config-data\") pod \"cinder-api-0\" 
(UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.128973 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.129026 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5434383-9f5d-41f5-a4c7-cf7931b42919-logs\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.129056 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.129150 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5cx5\" (UniqueName: \"kubernetes.io/projected/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-kube-api-access-p5cx5\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.129626 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " 
pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.129812 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-config\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.129829 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.130075 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-dns-svc\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.130104 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.154410 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5cx5\" (UniqueName: \"kubernetes.io/projected/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-kube-api-access-p5cx5\") pod \"dnsmasq-dns-6578955fd5-gzpxc\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 
21:42:32.231664 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-scripts\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.231788 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.231829 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-config-data-custom\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.231861 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skjss\" (UniqueName: \"kubernetes.io/projected/b5434383-9f5d-41f5-a4c7-cf7931b42919-kube-api-access-skjss\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.231896 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5434383-9f5d-41f5-a4c7-cf7931b42919-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.231918 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-config-data\") pod 
\"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.231962 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5434383-9f5d-41f5-a4c7-cf7931b42919-logs\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.234311 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5434383-9f5d-41f5-a4c7-cf7931b42919-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.235109 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5434383-9f5d-41f5-a4c7-cf7931b42919-logs\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.236623 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.237056 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-config-data-custom\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.238686 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-config-data\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.241317 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-scripts\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.256430 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skjss\" (UniqueName: \"kubernetes.io/projected/b5434383-9f5d-41f5-a4c7-cf7931b42919-kube-api-access-skjss\") pod \"cinder-api-0\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.420433 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.438132 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.478867 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1a56691-c50a-4917-9331-4920a62c5a3b" path="/var/lib/kubelet/pods/c1a56691-c50a-4917-9331-4920a62c5a3b/volumes" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.480426 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dcff88f94-kdzrd" event={"ID":"bf0c5bda-da34-4d01-8806-6016a8a24a9a","Type":"ContainerStarted","Data":"83decd64c460ab8530b49cf8004cdfc522963351a6ed7b0abb6a33421ab2bd0e"} Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.480600 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:32 crc kubenswrapper[4915]: I1124 21:42:32.588512 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-dcff88f94-kdzrd" podStartSLOduration=3.588492834 podStartE2EDuration="3.588492834s" podCreationTimestamp="2025-11-24 21:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:32.584387843 +0000 UTC m=+1370.900640036" watchObservedRunningTime="2025-11-24 21:42:32.588492834 +0000 UTC m=+1370.904745007" Nov 24 21:42:33 crc kubenswrapper[4915]: I1124 21:42:33.489523 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:33 crc kubenswrapper[4915]: I1124 21:42:33.871538 4915 scope.go:117] "RemoveContainer" containerID="d4e4e77ad89876f92a5feeb744fc6ed7d62060a1cb6473adc880a964c65dce0e" Nov 24 21:42:34 crc kubenswrapper[4915]: I1124 21:42:34.016896 4915 scope.go:117] "RemoveContainer" containerID="1b10ff0cafbb37746e4eda668a03a692705626c3d1ca7de1b39096e6444c2157" Nov 24 21:42:34 crc kubenswrapper[4915]: I1124 21:42:34.500891 4915 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8" event={"ID":"2bce36ae-dbff-4c97-9f8e-43edd44dfad0","Type":"ContainerStarted","Data":"d40044fd51b308a2c09fa4a5b298d4064dc86318ed23359531210dfe73ecf5a8"} Nov 24 21:42:34 crc kubenswrapper[4915]: I1124 21:42:34.505142 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-66f8975c4c-2r5c7" event={"ID":"2b2a93f7-9082-4391-b366-64cd870dc30e","Type":"ContainerStarted","Data":"b68895f7ef52cda78bc3aefa20a75a44590839339aff58787362a934ebe59dcb"} Nov 24 21:42:34 crc kubenswrapper[4915]: I1124 21:42:34.507226 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" event={"ID":"23b05b9c-7263-4bd9-9f62-d1c95e92fa68","Type":"ContainerStarted","Data":"49749b5d0d911540baa722abbfd23c297006c6d11ec5c0227db1e44cd8ad3225"} Nov 24 21:42:34 crc kubenswrapper[4915]: I1124 21:42:34.507481 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" podUID="23b05b9c-7263-4bd9-9f62-d1c95e92fa68" containerName="dnsmasq-dns" containerID="cri-o://49749b5d0d911540baa722abbfd23c297006c6d11ec5c0227db1e44cd8ad3225" gracePeriod=10 Nov 24 21:42:34 crc kubenswrapper[4915]: I1124 21:42:34.545254 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" podStartSLOduration=5.545231223 podStartE2EDuration="5.545231223s" podCreationTimestamp="2025-11-24 21:42:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:34.533518637 +0000 UTC m=+1372.849770840" watchObservedRunningTime="2025-11-24 21:42:34.545231223 +0000 UTC m=+1372.861483396" Nov 24 21:42:34 crc kubenswrapper[4915]: I1124 21:42:34.558029 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gzpxc"] Nov 24 21:42:34 crc 
kubenswrapper[4915]: I1124 21:42:34.759534 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:34 crc kubenswrapper[4915]: I1124 21:42:34.939687 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:42:34 crc kubenswrapper[4915]: I1124 21:42:34.953226 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:42:34 crc kubenswrapper[4915]: I1124 21:42:34.963099 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.154967 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.213601 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-dns-swift-storage-0\") pod \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.213685 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-config\") pod \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.213871 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp52t\" (UniqueName: \"kubernetes.io/projected/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-kube-api-access-fp52t\") pod \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.213923 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-ovsdbserver-nb\") pod \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.213987 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-ovsdbserver-sb\") pod \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.214123 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-dns-svc\") pod \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\" (UID: \"23b05b9c-7263-4bd9-9f62-d1c95e92fa68\") " Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.228216 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-kube-api-access-fp52t" (OuterVolumeSpecName: "kube-api-access-fp52t") pod "23b05b9c-7263-4bd9-9f62-d1c95e92fa68" (UID: "23b05b9c-7263-4bd9-9f62-d1c95e92fa68"). InnerVolumeSpecName "kube-api-access-fp52t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.311369 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "23b05b9c-7263-4bd9-9f62-d1c95e92fa68" (UID: "23b05b9c-7263-4bd9-9f62-d1c95e92fa68"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.311655 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "23b05b9c-7263-4bd9-9f62-d1c95e92fa68" (UID: "23b05b9c-7263-4bd9-9f62-d1c95e92fa68"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.311746 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "23b05b9c-7263-4bd9-9f62-d1c95e92fa68" (UID: "23b05b9c-7263-4bd9-9f62-d1c95e92fa68"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.312771 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "23b05b9c-7263-4bd9-9f62-d1c95e92fa68" (UID: "23b05b9c-7263-4bd9-9f62-d1c95e92fa68"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.317269 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp52t\" (UniqueName: \"kubernetes.io/projected/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-kube-api-access-fp52t\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.317300 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.317309 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.317319 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.317328 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.324411 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-config" (OuterVolumeSpecName: "config") pod "23b05b9c-7263-4bd9-9f62-d1c95e92fa68" (UID: "23b05b9c-7263-4bd9-9f62-d1c95e92fa68"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.419874 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23b05b9c-7263-4bd9-9f62-d1c95e92fa68-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.537553 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8" event={"ID":"2bce36ae-dbff-4c97-9f8e-43edd44dfad0","Type":"ContainerStarted","Data":"8198ea8c1a6e14a3223c5e3246717fefbd0722eafdf9ea8aec7aada1efb67eed"} Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.551582 4915 generic.go:334] "Generic (PLEG): container finished" podID="4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" containerID="dfca5ca6d5e8f82c6696193a03078f53441e1a6473fa31ce9b9dda2ab1adb4a6" exitCode=0 Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.554250 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" event={"ID":"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140","Type":"ContainerDied","Data":"dfca5ca6d5e8f82c6696193a03078f53441e1a6473fa31ce9b9dda2ab1adb4a6"} Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.554973 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" event={"ID":"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140","Type":"ContainerStarted","Data":"65f70f6d6b12ef79a4482df97069162ec96de1596982c02c2c10b85406a7bf60"} Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.559955 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cddeb8d-6499-40ce-866b-6009ade95f6c","Type":"ContainerStarted","Data":"4d729a7333656b811ad4eb39a6bbfe1f469f09151172abe19d052f424403fa8e"} Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.561893 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"b5434383-9f5d-41f5-a4c7-cf7931b42919","Type":"ContainerStarted","Data":"ead3e7e9eec95deb19dcc8a2c9ecdd1642a3a2b0972b83558514b6d1876865a1"} Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.566303 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7dd95d5f64-ht7d8" podStartSLOduration=2.907620065 podStartE2EDuration="6.566284954s" podCreationTimestamp="2025-11-24 21:42:29 +0000 UTC" firstStartedPulling="2025-11-24 21:42:30.33821908 +0000 UTC m=+1368.654471253" lastFinishedPulling="2025-11-24 21:42:33.996883969 +0000 UTC m=+1372.313136142" observedRunningTime="2025-11-24 21:42:35.566016857 +0000 UTC m=+1373.882269050" watchObservedRunningTime="2025-11-24 21:42:35.566284954 +0000 UTC m=+1373.882537127" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.584885 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-66f8975c4c-2r5c7" event={"ID":"2b2a93f7-9082-4391-b366-64cd870dc30e","Type":"ContainerStarted","Data":"b348facfdfa135a6617123f1c167b83e8a5daffced821c749958ffba447262c6"} Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.594520 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a117d18-c25b-4817-a114-fdb48bf5151c","Type":"ContainerStarted","Data":"491ba205407d35b276c95b2f9a229a455fef1daec816a0066aa733119bf84f29"} Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.620589 4915 generic.go:334] "Generic (PLEG): container finished" podID="23b05b9c-7263-4bd9-9f62-d1c95e92fa68" containerID="49749b5d0d911540baa722abbfd23c297006c6d11ec5c0227db1e44cd8ad3225" exitCode=0 Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.620648 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" event={"ID":"23b05b9c-7263-4bd9-9f62-d1c95e92fa68","Type":"ContainerDied","Data":"49749b5d0d911540baa722abbfd23c297006c6d11ec5c0227db1e44cd8ad3225"} Nov 24 21:42:35 
crc kubenswrapper[4915]: I1124 21:42:35.620676 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" event={"ID":"23b05b9c-7263-4bd9-9f62-d1c95e92fa68","Type":"ContainerDied","Data":"9038df88f96f2faf8f1787489219e27ac6e865849435109a209575c3efe4f159"} Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.620693 4915 scope.go:117] "RemoveContainer" containerID="49749b5d0d911540baa722abbfd23c297006c6d11ec5c0227db1e44cd8ad3225" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.620870 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-n4drj" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.651976 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-66f8975c4c-2r5c7" podStartSLOduration=3.061415881 podStartE2EDuration="6.651958593s" podCreationTimestamp="2025-11-24 21:42:29 +0000 UTC" firstStartedPulling="2025-11-24 21:42:30.470500426 +0000 UTC m=+1368.786752599" lastFinishedPulling="2025-11-24 21:42:34.061043138 +0000 UTC m=+1372.377295311" observedRunningTime="2025-11-24 21:42:35.62625051 +0000 UTC m=+1373.942502703" watchObservedRunningTime="2025-11-24 21:42:35.651958593 +0000 UTC m=+1373.968210766" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.704735 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-n4drj"] Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.721521 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-n4drj"] Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.734432 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.827108 4915 scope.go:117] "RemoveContainer" containerID="1ab180827e1b531367c4a282a768c223edbc42d038cdf79bb1efead2548423a9" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 
21:42:35.904100 4915 scope.go:117] "RemoveContainer" containerID="49749b5d0d911540baa722abbfd23c297006c6d11ec5c0227db1e44cd8ad3225" Nov 24 21:42:35 crc kubenswrapper[4915]: E1124 21:42:35.905156 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49749b5d0d911540baa722abbfd23c297006c6d11ec5c0227db1e44cd8ad3225\": container with ID starting with 49749b5d0d911540baa722abbfd23c297006c6d11ec5c0227db1e44cd8ad3225 not found: ID does not exist" containerID="49749b5d0d911540baa722abbfd23c297006c6d11ec5c0227db1e44cd8ad3225" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.905213 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49749b5d0d911540baa722abbfd23c297006c6d11ec5c0227db1e44cd8ad3225"} err="failed to get container status \"49749b5d0d911540baa722abbfd23c297006c6d11ec5c0227db1e44cd8ad3225\": rpc error: code = NotFound desc = could not find container \"49749b5d0d911540baa722abbfd23c297006c6d11ec5c0227db1e44cd8ad3225\": container with ID starting with 49749b5d0d911540baa722abbfd23c297006c6d11ec5c0227db1e44cd8ad3225 not found: ID does not exist" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.905242 4915 scope.go:117] "RemoveContainer" containerID="1ab180827e1b531367c4a282a768c223edbc42d038cdf79bb1efead2548423a9" Nov 24 21:42:35 crc kubenswrapper[4915]: E1124 21:42:35.905771 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ab180827e1b531367c4a282a768c223edbc42d038cdf79bb1efead2548423a9\": container with ID starting with 1ab180827e1b531367c4a282a768c223edbc42d038cdf79bb1efead2548423a9 not found: ID does not exist" containerID="1ab180827e1b531367c4a282a768c223edbc42d038cdf79bb1efead2548423a9" Nov 24 21:42:35 crc kubenswrapper[4915]: I1124 21:42:35.905903 4915 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1ab180827e1b531367c4a282a768c223edbc42d038cdf79bb1efead2548423a9"} err="failed to get container status \"1ab180827e1b531367c4a282a768c223edbc42d038cdf79bb1efead2548423a9\": rpc error: code = NotFound desc = could not find container \"1ab180827e1b531367c4a282a768c223edbc42d038cdf79bb1efead2548423a9\": container with ID starting with 1ab180827e1b531367c4a282a768c223edbc42d038cdf79bb1efead2548423a9 not found: ID does not exist" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.336550 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7985c5bcf4-b8zrw"] Nov 24 21:42:36 crc kubenswrapper[4915]: E1124 21:42:36.337334 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b05b9c-7263-4bd9-9f62-d1c95e92fa68" containerName="init" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.337352 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b05b9c-7263-4bd9-9f62-d1c95e92fa68" containerName="init" Nov 24 21:42:36 crc kubenswrapper[4915]: E1124 21:42:36.337363 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23b05b9c-7263-4bd9-9f62-d1c95e92fa68" containerName="dnsmasq-dns" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.337372 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="23b05b9c-7263-4bd9-9f62-d1c95e92fa68" containerName="dnsmasq-dns" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.337607 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="23b05b9c-7263-4bd9-9f62-d1c95e92fa68" containerName="dnsmasq-dns" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.338735 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.343742 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.343908 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.352486 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7985c5bcf4-b8zrw"] Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.439867 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b05b9c-7263-4bd9-9f62-d1c95e92fa68" path="/var/lib/kubelet/pods/23b05b9c-7263-4bd9-9f62-d1c95e92fa68/volumes" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.449506 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-public-tls-certs\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.449998 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-config-data\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.450148 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8s5r\" (UniqueName: \"kubernetes.io/projected/35feb1a4-a62a-425e-a77f-48e60720b620-kube-api-access-n8s5r\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: 
\"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.450277 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35feb1a4-a62a-425e-a77f-48e60720b620-logs\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.450526 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-config-data-custom\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.450676 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-internal-tls-certs\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.450797 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-combined-ca-bundle\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.552529 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-config-data-custom\") pod 
\"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.555006 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-internal-tls-certs\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.555339 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-combined-ca-bundle\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.556131 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-public-tls-certs\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.556243 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-config-data\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.556358 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8s5r\" (UniqueName: \"kubernetes.io/projected/35feb1a4-a62a-425e-a77f-48e60720b620-kube-api-access-n8s5r\") pod 
\"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.557286 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35feb1a4-a62a-425e-a77f-48e60720b620-logs\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.558224 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35feb1a4-a62a-425e-a77f-48e60720b620-logs\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.560122 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-internal-tls-certs\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.568350 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-combined-ca-bundle\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.568977 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-config-data\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " 
pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.578681 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-public-tls-certs\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.584321 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8s5r\" (UniqueName: \"kubernetes.io/projected/35feb1a4-a62a-425e-a77f-48e60720b620-kube-api-access-n8s5r\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.592105 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/35feb1a4-a62a-425e-a77f-48e60720b620-config-data-custom\") pod \"barbican-api-7985c5bcf4-b8zrw\" (UID: \"35feb1a4-a62a-425e-a77f-48e60720b620\") " pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.635038 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b5434383-9f5d-41f5-a4c7-cf7931b42919","Type":"ContainerStarted","Data":"2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab"} Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.638703 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cddeb8d-6499-40ce-866b-6009ade95f6c","Type":"ContainerStarted","Data":"af69cdeddcb763ac67d6fb42617ab2bf479f34a1236215f298acc13cf506a0fd"} Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.644202 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" 
event={"ID":"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140","Type":"ContainerStarted","Data":"d6e6fc895d43407901a8b3e58e339275612213ef5303ec18cdfbd0a09a77936f"} Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.680323 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:36 crc kubenswrapper[4915]: I1124 21:42:36.702792 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" podStartSLOduration=5.702753026 podStartE2EDuration="5.702753026s" podCreationTimestamp="2025-11-24 21:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:36.685066959 +0000 UTC m=+1375.001319132" watchObservedRunningTime="2025-11-24 21:42:36.702753026 +0000 UTC m=+1375.019005219" Nov 24 21:42:37 crc kubenswrapper[4915]: I1124 21:42:37.243055 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7985c5bcf4-b8zrw"] Nov 24 21:42:37 crc kubenswrapper[4915]: I1124 21:42:37.420867 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:37 crc kubenswrapper[4915]: I1124 21:42:37.711022 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a117d18-c25b-4817-a114-fdb48bf5151c","Type":"ContainerStarted","Data":"ebdd335570b3c39e2b275e04f1cdbcf87ba66ed6e99083be5a74665fba50e223"} Nov 24 21:42:37 crc kubenswrapper[4915]: I1124 21:42:37.748452 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7985c5bcf4-b8zrw" event={"ID":"35feb1a4-a62a-425e-a77f-48e60720b620","Type":"ContainerStarted","Data":"6a310f21437b3646cf71afea5f5b096c8dbc33dabeca8e9506f98ca9b2015bdc"} Nov 24 21:42:37 crc kubenswrapper[4915]: I1124 21:42:37.748496 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-7985c5bcf4-b8zrw" event={"ID":"35feb1a4-a62a-425e-a77f-48e60720b620","Type":"ContainerStarted","Data":"19ac2498f90acbc88b1e0cfedd6e40714d56620a1ba2280fb68e46adc78df419"} Nov 24 21:42:37 crc kubenswrapper[4915]: I1124 21:42:37.768609 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cddeb8d-6499-40ce-866b-6009ade95f6c","Type":"ContainerStarted","Data":"02a4b55630a569a5e17e835d5af62aa578681460f95278eb7ed660458ded7995"} Nov 24 21:42:37 crc kubenswrapper[4915]: I1124 21:42:37.772082 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b5434383-9f5d-41f5-a4c7-cf7931b42919","Type":"ContainerStarted","Data":"b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9"} Nov 24 21:42:37 crc kubenswrapper[4915]: I1124 21:42:37.772212 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b5434383-9f5d-41f5-a4c7-cf7931b42919" containerName="cinder-api-log" containerID="cri-o://2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab" gracePeriod=30 Nov 24 21:42:37 crc kubenswrapper[4915]: I1124 21:42:37.772880 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b5434383-9f5d-41f5-a4c7-cf7931b42919" containerName="cinder-api" containerID="cri-o://b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9" gracePeriod=30 Nov 24 21:42:37 crc kubenswrapper[4915]: I1124 21:42:37.835406 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.835386485 podStartE2EDuration="6.835386485s" podCreationTimestamp="2025-11-24 21:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:37.80661044 +0000 UTC m=+1376.122862613" watchObservedRunningTime="2025-11-24 
21:42:37.835386485 +0000 UTC m=+1376.151638658" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.564509 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-dcff88f94-kdzrd" podUID="bf0c5bda-da34-4d01-8806-6016a8a24a9a" containerName="barbican-api" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.649821 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.741376 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-config-data-custom\") pod \"b5434383-9f5d-41f5-a4c7-cf7931b42919\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.741460 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-combined-ca-bundle\") pod \"b5434383-9f5d-41f5-a4c7-cf7931b42919\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.741498 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5434383-9f5d-41f5-a4c7-cf7931b42919-logs\") pod \"b5434383-9f5d-41f5-a4c7-cf7931b42919\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.741551 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skjss\" (UniqueName: \"kubernetes.io/projected/b5434383-9f5d-41f5-a4c7-cf7931b42919-kube-api-access-skjss\") pod \"b5434383-9f5d-41f5-a4c7-cf7931b42919\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 
21:42:38.741665 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5434383-9f5d-41f5-a4c7-cf7931b42919-etc-machine-id\") pod \"b5434383-9f5d-41f5-a4c7-cf7931b42919\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.741691 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-config-data\") pod \"b5434383-9f5d-41f5-a4c7-cf7931b42919\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.741713 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-scripts\") pod \"b5434383-9f5d-41f5-a4c7-cf7931b42919\" (UID: \"b5434383-9f5d-41f5-a4c7-cf7931b42919\") " Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.741751 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5434383-9f5d-41f5-a4c7-cf7931b42919-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b5434383-9f5d-41f5-a4c7-cf7931b42919" (UID: "b5434383-9f5d-41f5-a4c7-cf7931b42919"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.742155 4915 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b5434383-9f5d-41f5-a4c7-cf7931b42919-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.742522 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5434383-9f5d-41f5-a4c7-cf7931b42919-logs" (OuterVolumeSpecName: "logs") pod "b5434383-9f5d-41f5-a4c7-cf7931b42919" (UID: "b5434383-9f5d-41f5-a4c7-cf7931b42919"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.747545 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-scripts" (OuterVolumeSpecName: "scripts") pod "b5434383-9f5d-41f5-a4c7-cf7931b42919" (UID: "b5434383-9f5d-41f5-a4c7-cf7931b42919"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.748814 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5434383-9f5d-41f5-a4c7-cf7931b42919-kube-api-access-skjss" (OuterVolumeSpecName: "kube-api-access-skjss") pod "b5434383-9f5d-41f5-a4c7-cf7931b42919" (UID: "b5434383-9f5d-41f5-a4c7-cf7931b42919"). InnerVolumeSpecName "kube-api-access-skjss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.760396 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b5434383-9f5d-41f5-a4c7-cf7931b42919" (UID: "b5434383-9f5d-41f5-a4c7-cf7931b42919"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.777203 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5434383-9f5d-41f5-a4c7-cf7931b42919" (UID: "b5434383-9f5d-41f5-a4c7-cf7931b42919"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.788381 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7985c5bcf4-b8zrw" event={"ID":"35feb1a4-a62a-425e-a77f-48e60720b620","Type":"ContainerStarted","Data":"3f4aadc06a46e3de90ba229dd13e358d6075e63972e25c515932025a057290fb"} Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.788457 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.788516 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.793064 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cddeb8d-6499-40ce-866b-6009ade95f6c","Type":"ContainerStarted","Data":"4d3eb1db0c501024a55b79378896576216e89f5ceb45900cdad7aa5e034a9e8a"} Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.795170 4915 generic.go:334] "Generic (PLEG): container finished" podID="b5434383-9f5d-41f5-a4c7-cf7931b42919" containerID="b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9" exitCode=0 Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.795218 4915 generic.go:334] "Generic (PLEG): container finished" podID="b5434383-9f5d-41f5-a4c7-cf7931b42919" 
containerID="2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab" exitCode=143 Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.795264 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b5434383-9f5d-41f5-a4c7-cf7931b42919","Type":"ContainerDied","Data":"b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9"} Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.795315 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b5434383-9f5d-41f5-a4c7-cf7931b42919","Type":"ContainerDied","Data":"2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab"} Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.795332 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b5434383-9f5d-41f5-a4c7-cf7931b42919","Type":"ContainerDied","Data":"ead3e7e9eec95deb19dcc8a2c9ecdd1642a3a2b0972b83558514b6d1876865a1"} Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.795351 4915 scope.go:117] "RemoveContainer" containerID="b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.795626 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.805255 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a117d18-c25b-4817-a114-fdb48bf5151c","Type":"ContainerStarted","Data":"bc94bee23ccc267471861487814026af25cebe2ecb665437a6a66118d3efa8b3"} Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.832511 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-config-data" (OuterVolumeSpecName: "config-data") pod "b5434383-9f5d-41f5-a4c7-cf7931b42919" (UID: "b5434383-9f5d-41f5-a4c7-cf7931b42919"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.833835 4915 scope.go:117] "RemoveContainer" containerID="2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.843771 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7985c5bcf4-b8zrw" podStartSLOduration=2.843751504 podStartE2EDuration="2.843751504s" podCreationTimestamp="2025-11-24 21:42:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:38.817812914 +0000 UTC m=+1377.134065097" watchObservedRunningTime="2025-11-24 21:42:38.843751504 +0000 UTC m=+1377.160003677" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.844025 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.844064 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.844078 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5434383-9f5d-41f5-a4c7-cf7931b42919-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.844096 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skjss\" (UniqueName: \"kubernetes.io/projected/b5434383-9f5d-41f5-a4c7-cf7931b42919-kube-api-access-skjss\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.844114 4915 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.844127 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5434383-9f5d-41f5-a4c7-cf7931b42919-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.848515 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.900495751 podStartE2EDuration="7.848505551s" podCreationTimestamp="2025-11-24 21:42:31 +0000 UTC" firstStartedPulling="2025-11-24 21:42:34.97492584 +0000 UTC m=+1373.291178013" lastFinishedPulling="2025-11-24 21:42:35.92293564 +0000 UTC m=+1374.239187813" observedRunningTime="2025-11-24 21:42:38.839641762 +0000 UTC m=+1377.155893945" watchObservedRunningTime="2025-11-24 21:42:38.848505551 +0000 UTC m=+1377.164757724" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.864223 4915 scope.go:117] "RemoveContainer" containerID="b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9" Nov 24 21:42:38 crc kubenswrapper[4915]: E1124 21:42:38.864817 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9\": container with ID starting with b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9 not found: ID does not exist" containerID="b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.864872 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9"} err="failed to get container status 
\"b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9\": rpc error: code = NotFound desc = could not find container \"b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9\": container with ID starting with b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9 not found: ID does not exist" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.864925 4915 scope.go:117] "RemoveContainer" containerID="2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab" Nov 24 21:42:38 crc kubenswrapper[4915]: E1124 21:42:38.865375 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab\": container with ID starting with 2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab not found: ID does not exist" containerID="2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.865427 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab"} err="failed to get container status \"2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab\": rpc error: code = NotFound desc = could not find container \"2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab\": container with ID starting with 2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab not found: ID does not exist" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.865460 4915 scope.go:117] "RemoveContainer" containerID="b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.865802 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9"} err="failed to get 
container status \"b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9\": rpc error: code = NotFound desc = could not find container \"b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9\": container with ID starting with b637d49ae103fd99c05aea46e9f304e5146e296d0abf88178bf4549e22832bc9 not found: ID does not exist" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.865827 4915 scope.go:117] "RemoveContainer" containerID="2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab" Nov 24 21:42:38 crc kubenswrapper[4915]: I1124 21:42:38.866233 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab"} err="failed to get container status \"2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab\": rpc error: code = NotFound desc = could not find container \"2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab\": container with ID starting with 2e9a5a031b2a8399c5415d1b7e7093871d7397afd7826ed5f54c723693e6eaab not found: ID does not exist" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.142577 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.158893 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.183128 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:42:39 crc kubenswrapper[4915]: E1124 21:42:39.183650 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5434383-9f5d-41f5-a4c7-cf7931b42919" containerName="cinder-api" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.183669 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5434383-9f5d-41f5-a4c7-cf7931b42919" containerName="cinder-api" Nov 24 21:42:39 crc kubenswrapper[4915]: E1124 
21:42:39.183688 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5434383-9f5d-41f5-a4c7-cf7931b42919" containerName="cinder-api-log" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.183695 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5434383-9f5d-41f5-a4c7-cf7931b42919" containerName="cinder-api-log" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.183940 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5434383-9f5d-41f5-a4c7-cf7931b42919" containerName="cinder-api" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.183975 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5434383-9f5d-41f5-a4c7-cf7931b42919" containerName="cinder-api-log" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.196646 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.197521 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.199592 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.199817 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.199952 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.356754 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6clmw\" (UniqueName: \"kubernetes.io/projected/848690e8-3b39-4e42-b420-ab4cc3d251be-kube-api-access-6clmw\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: 
I1124 21:42:39.356864 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-scripts\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.356915 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-config-data-custom\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.356954 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.356975 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.357117 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/848690e8-3b39-4e42-b420-ab4cc3d251be-etc-machine-id\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.357159 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-public-tls-certs\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.357176 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-config-data\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.357202 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/848690e8-3b39-4e42-b420-ab4cc3d251be-logs\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.459531 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-public-tls-certs\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.459801 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-config-data\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.459839 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/848690e8-3b39-4e42-b420-ab4cc3d251be-logs\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc 
kubenswrapper[4915]: I1124 21:42:39.459897 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6clmw\" (UniqueName: \"kubernetes.io/projected/848690e8-3b39-4e42-b420-ab4cc3d251be-kube-api-access-6clmw\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.459940 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-scripts\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.459966 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-config-data-custom\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.460000 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.460311 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/848690e8-3b39-4e42-b420-ab4cc3d251be-logs\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.460676 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.460802 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/848690e8-3b39-4e42-b420-ab4cc3d251be-etc-machine-id\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.460898 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/848690e8-3b39-4e42-b420-ab4cc3d251be-etc-machine-id\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.464082 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-scripts\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.464202 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.464431 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-config-data-custom\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.466011 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.473757 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.475361 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-config-data\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.480430 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6clmw\" (UniqueName: \"kubernetes.io/projected/848690e8-3b39-4e42-b420-ab4cc3d251be-kube-api-access-6clmw\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.489352 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/848690e8-3b39-4e42-b420-ab4cc3d251be-public-tls-certs\") pod \"cinder-api-0\" (UID: \"848690e8-3b39-4e42-b420-ab4cc3d251be\") " pod="openstack/cinder-api-0" Nov 24 21:42:39 crc kubenswrapper[4915]: I1124 21:42:39.517821 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 24 21:42:40 crc kubenswrapper[4915]: I1124 21:42:40.133698 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 24 21:42:40 crc kubenswrapper[4915]: I1124 21:42:40.445005 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5434383-9f5d-41f5-a4c7-cf7931b42919" path="/var/lib/kubelet/pods/b5434383-9f5d-41f5-a4c7-cf7931b42919/volumes" Nov 24 21:42:40 crc kubenswrapper[4915]: I1124 21:42:40.841507 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cddeb8d-6499-40ce-866b-6009ade95f6c","Type":"ContainerStarted","Data":"de28da0b8d5d47e29d3a7c9417ee3bf491088f5807102c18164da015e0d3d1fb"} Nov 24 21:42:40 crc kubenswrapper[4915]: I1124 21:42:40.842516 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:42:40 crc kubenswrapper[4915]: I1124 21:42:40.843901 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"848690e8-3b39-4e42-b420-ab4cc3d251be","Type":"ContainerStarted","Data":"537d5f9499894b41b4ccc000cfa39e4c1ab0883fd35db2710322cdece95e8ddd"} Nov 24 21:42:40 crc kubenswrapper[4915]: I1124 21:42:40.843946 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"848690e8-3b39-4e42-b420-ab4cc3d251be","Type":"ContainerStarted","Data":"f104d452a3918769484fde17b0b4bd72d71a4bdb7653235c01515707d2fa8cb9"} Nov 24 21:42:40 crc kubenswrapper[4915]: I1124 21:42:40.879207 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.807327752 podStartE2EDuration="9.879187354s" podCreationTimestamp="2025-11-24 21:42:31 +0000 UTC" firstStartedPulling="2025-11-24 21:42:34.970959922 +0000 UTC m=+1373.287212095" lastFinishedPulling="2025-11-24 21:42:40.042819524 +0000 UTC m=+1378.359071697" observedRunningTime="2025-11-24 
21:42:40.871116887 +0000 UTC m=+1379.187369060" watchObservedRunningTime="2025-11-24 21:42:40.879187354 +0000 UTC m=+1379.195439537" Nov 24 21:42:41 crc kubenswrapper[4915]: I1124 21:42:41.552467 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:41 crc kubenswrapper[4915]: I1124 21:42:41.751211 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:41 crc kubenswrapper[4915]: I1124 21:42:41.860165 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"848690e8-3b39-4e42-b420-ab4cc3d251be","Type":"ContainerStarted","Data":"8aea5b81cb65831d6f321a1b905d20b65bce09e78f28d34017d6a74afec8a632"} Nov 24 21:42:41 crc kubenswrapper[4915]: I1124 21:42:41.860902 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 24 21:42:41 crc kubenswrapper[4915]: I1124 21:42:41.891113 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.891085649 podStartE2EDuration="2.891085649s" podCreationTimestamp="2025-11-24 21:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:41.881804458 +0000 UTC m=+1380.198056641" watchObservedRunningTime="2025-11-24 21:42:41.891085649 +0000 UTC m=+1380.207337822" Nov 24 21:42:42 crc kubenswrapper[4915]: I1124 21:42:42.026678 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 21:42:42 crc kubenswrapper[4915]: I1124 21:42:42.137419 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-855b85d565-grqmb" Nov 24 21:42:42 crc kubenswrapper[4915]: I1124 21:42:42.213858 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-598d4b5ccb-wjxn7"] Nov 24 21:42:42 crc kubenswrapper[4915]: I1124 21:42:42.214168 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-598d4b5ccb-wjxn7" podUID="521647b8-476f-4319-aa06-f9e5af5fdbe9" containerName="neutron-api" containerID="cri-o://b369783a695a576c024bc7c7bcc2dc1c591e4f7d02b5a1e7c832103d1cbde7d3" gracePeriod=30 Nov 24 21:42:42 crc kubenswrapper[4915]: I1124 21:42:42.214360 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-598d4b5ccb-wjxn7" podUID="521647b8-476f-4319-aa06-f9e5af5fdbe9" containerName="neutron-httpd" containerID="cri-o://36528c4fd166c46acc50b734dbc2700ef96a38c050fd1120d8ea510c6977f90e" gracePeriod=30 Nov 24 21:42:42 crc kubenswrapper[4915]: I1124 21:42:42.263394 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 21:42:42 crc kubenswrapper[4915]: I1124 21:42:42.422908 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:42:42 crc kubenswrapper[4915]: I1124 21:42:42.562422 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-ml86b"] Nov 24 21:42:42 crc kubenswrapper[4915]: I1124 21:42:42.562720 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-ml86b" podUID="9e5395dd-1fa1-461c-b0eb-edc5c817955c" containerName="dnsmasq-dns" containerID="cri-o://a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615" gracePeriod=10 Nov 24 21:42:42 crc kubenswrapper[4915]: I1124 21:42:42.879044 4915 generic.go:334] "Generic (PLEG): container finished" podID="521647b8-476f-4319-aa06-f9e5af5fdbe9" containerID="36528c4fd166c46acc50b734dbc2700ef96a38c050fd1120d8ea510c6977f90e" exitCode=0 Nov 24 21:42:42 crc kubenswrapper[4915]: I1124 21:42:42.879477 4915 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-598d4b5ccb-wjxn7" event={"ID":"521647b8-476f-4319-aa06-f9e5af5fdbe9","Type":"ContainerDied","Data":"36528c4fd166c46acc50b734dbc2700ef96a38c050fd1120d8ea510c6977f90e"} Nov 24 21:42:42 crc kubenswrapper[4915]: I1124 21:42:42.886641 4915 generic.go:334] "Generic (PLEG): container finished" podID="9e5395dd-1fa1-461c-b0eb-edc5c817955c" containerID="a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615" exitCode=0 Nov 24 21:42:42 crc kubenswrapper[4915]: I1124 21:42:42.887714 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-ml86b" event={"ID":"9e5395dd-1fa1-461c-b0eb-edc5c817955c","Type":"ContainerDied","Data":"a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615"} Nov 24 21:42:42 crc kubenswrapper[4915]: I1124 21:42:42.954082 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.244566 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.361931 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-dns-svc\") pod \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.362484 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-dns-swift-storage-0\") pod \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.362533 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-config\") pod \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.362566 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd4wm\" (UniqueName: \"kubernetes.io/projected/9e5395dd-1fa1-461c-b0eb-edc5c817955c-kube-api-access-kd4wm\") pod \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.362622 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-ovsdbserver-sb\") pod \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.362696 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-ovsdbserver-nb\") pod \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\" (UID: \"9e5395dd-1fa1-461c-b0eb-edc5c817955c\") " Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.372227 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e5395dd-1fa1-461c-b0eb-edc5c817955c-kube-api-access-kd4wm" (OuterVolumeSpecName: "kube-api-access-kd4wm") pod "9e5395dd-1fa1-461c-b0eb-edc5c817955c" (UID: "9e5395dd-1fa1-461c-b0eb-edc5c817955c"). InnerVolumeSpecName "kube-api-access-kd4wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.438852 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9e5395dd-1fa1-461c-b0eb-edc5c817955c" (UID: "9e5395dd-1fa1-461c-b0eb-edc5c817955c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.442389 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-config" (OuterVolumeSpecName: "config") pod "9e5395dd-1fa1-461c-b0eb-edc5c817955c" (UID: "9e5395dd-1fa1-461c-b0eb-edc5c817955c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.456619 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9e5395dd-1fa1-461c-b0eb-edc5c817955c" (UID: "9e5395dd-1fa1-461c-b0eb-edc5c817955c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.463895 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9e5395dd-1fa1-461c-b0eb-edc5c817955c" (UID: "9e5395dd-1fa1-461c-b0eb-edc5c817955c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.466905 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.466967 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.466979 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.466989 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd4wm\" (UniqueName: \"kubernetes.io/projected/9e5395dd-1fa1-461c-b0eb-edc5c817955c-kube-api-access-kd4wm\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.466999 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.518053 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9e5395dd-1fa1-461c-b0eb-edc5c817955c" (UID: "9e5395dd-1fa1-461c-b0eb-edc5c817955c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.570293 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e5395dd-1fa1-461c-b0eb-edc5c817955c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.898716 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-ml86b" event={"ID":"9e5395dd-1fa1-461c-b0eb-edc5c817955c","Type":"ContainerDied","Data":"074a66bac4a424746932f022f11e4175a17b0a6866b6cff997ed8eb6dfea7c3d"} Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.898797 4915 scope.go:117] "RemoveContainer" containerID="a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.898834 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-ml86b" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.898922 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0a117d18-c25b-4817-a114-fdb48bf5151c" containerName="cinder-scheduler" containerID="cri-o://ebdd335570b3c39e2b275e04f1cdbcf87ba66ed6e99083be5a74665fba50e223" gracePeriod=30 Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.899156 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0a117d18-c25b-4817-a114-fdb48bf5151c" containerName="probe" containerID="cri-o://bc94bee23ccc267471861487814026af25cebe2ecb665437a6a66118d3efa8b3" gracePeriod=30 Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.943963 4915 scope.go:117] "RemoveContainer" containerID="c0b5d949dd930184037b43f20b1a2e9299a00f8fe33bffa547431335eaea539b" Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.960408 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-ml86b"] Nov 24 21:42:43 crc kubenswrapper[4915]: I1124 21:42:43.969915 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-ml86b"] Nov 24 21:42:44 crc kubenswrapper[4915]: I1124 21:42:44.441824 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e5395dd-1fa1-461c-b0eb-edc5c817955c" path="/var/lib/kubelet/pods/9e5395dd-1fa1-461c-b0eb-edc5c817955c/volumes" Nov 24 21:42:44 crc kubenswrapper[4915]: I1124 21:42:44.912180 4915 generic.go:334] "Generic (PLEG): container finished" podID="0a117d18-c25b-4817-a114-fdb48bf5151c" containerID="bc94bee23ccc267471861487814026af25cebe2ecb665437a6a66118d3efa8b3" exitCode=0 Nov 24 21:42:44 crc kubenswrapper[4915]: I1124 21:42:44.912273 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"0a117d18-c25b-4817-a114-fdb48bf5151c","Type":"ContainerDied","Data":"bc94bee23ccc267471861487814026af25cebe2ecb665437a6a66118d3efa8b3"} Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.517912 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.617202 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-ovndb-tls-certs\") pod \"521647b8-476f-4319-aa06-f9e5af5fdbe9\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.617280 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-config\") pod \"521647b8-476f-4319-aa06-f9e5af5fdbe9\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.617300 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-combined-ca-bundle\") pod \"521647b8-476f-4319-aa06-f9e5af5fdbe9\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.617348 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7cjb\" (UniqueName: \"kubernetes.io/projected/521647b8-476f-4319-aa06-f9e5af5fdbe9-kube-api-access-b7cjb\") pod \"521647b8-476f-4319-aa06-f9e5af5fdbe9\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.617383 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-httpd-config\") pod \"521647b8-476f-4319-aa06-f9e5af5fdbe9\" (UID: \"521647b8-476f-4319-aa06-f9e5af5fdbe9\") " Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.632077 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521647b8-476f-4319-aa06-f9e5af5fdbe9-kube-api-access-b7cjb" (OuterVolumeSpecName: "kube-api-access-b7cjb") pod "521647b8-476f-4319-aa06-f9e5af5fdbe9" (UID: "521647b8-476f-4319-aa06-f9e5af5fdbe9"). InnerVolumeSpecName "kube-api-access-b7cjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.632588 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "521647b8-476f-4319-aa06-f9e5af5fdbe9" (UID: "521647b8-476f-4319-aa06-f9e5af5fdbe9"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.708557 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-config" (OuterVolumeSpecName: "config") pod "521647b8-476f-4319-aa06-f9e5af5fdbe9" (UID: "521647b8-476f-4319-aa06-f9e5af5fdbe9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.715883 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "521647b8-476f-4319-aa06-f9e5af5fdbe9" (UID: "521647b8-476f-4319-aa06-f9e5af5fdbe9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.719944 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.720056 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.720070 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7cjb\" (UniqueName: \"kubernetes.io/projected/521647b8-476f-4319-aa06-f9e5af5fdbe9-kube-api-access-b7cjb\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.720085 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.730097 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "521647b8-476f-4319-aa06-f9e5af5fdbe9" (UID: "521647b8-476f-4319-aa06-f9e5af5fdbe9"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.809861 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.822500 4915 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/521647b8-476f-4319-aa06-f9e5af5fdbe9-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.923293 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-config-data-custom\") pod \"0a117d18-c25b-4817-a114-fdb48bf5151c\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.923414 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-combined-ca-bundle\") pod \"0a117d18-c25b-4817-a114-fdb48bf5151c\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.923885 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-config-data\") pod \"0a117d18-c25b-4817-a114-fdb48bf5151c\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.923957 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a117d18-c25b-4817-a114-fdb48bf5151c-etc-machine-id\") pod \"0a117d18-c25b-4817-a114-fdb48bf5151c\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.924004 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-scripts\") pod 
\"0a117d18-c25b-4817-a114-fdb48bf5151c\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.924125 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfh4n\" (UniqueName: \"kubernetes.io/projected/0a117d18-c25b-4817-a114-fdb48bf5151c-kube-api-access-kfh4n\") pod \"0a117d18-c25b-4817-a114-fdb48bf5151c\" (UID: \"0a117d18-c25b-4817-a114-fdb48bf5151c\") " Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.924302 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a117d18-c25b-4817-a114-fdb48bf5151c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0a117d18-c25b-4817-a114-fdb48bf5151c" (UID: "0a117d18-c25b-4817-a114-fdb48bf5151c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.925374 4915 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a117d18-c25b-4817-a114-fdb48bf5151c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.928660 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a117d18-c25b-4817-a114-fdb48bf5151c-kube-api-access-kfh4n" (OuterVolumeSpecName: "kube-api-access-kfh4n") pod "0a117d18-c25b-4817-a114-fdb48bf5151c" (UID: "0a117d18-c25b-4817-a114-fdb48bf5151c"). InnerVolumeSpecName "kube-api-access-kfh4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.929824 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0a117d18-c25b-4817-a114-fdb48bf5151c" (UID: "0a117d18-c25b-4817-a114-fdb48bf5151c"). 
InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.932155 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-scripts" (OuterVolumeSpecName: "scripts") pod "0a117d18-c25b-4817-a114-fdb48bf5151c" (UID: "0a117d18-c25b-4817-a114-fdb48bf5151c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.932230 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.932228 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a117d18-c25b-4817-a114-fdb48bf5151c","Type":"ContainerDied","Data":"ebdd335570b3c39e2b275e04f1cdbcf87ba66ed6e99083be5a74665fba50e223"} Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.932394 4915 scope.go:117] "RemoveContainer" containerID="bc94bee23ccc267471861487814026af25cebe2ecb665437a6a66118d3efa8b3" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.932172 4915 generic.go:334] "Generic (PLEG): container finished" podID="0a117d18-c25b-4817-a114-fdb48bf5151c" containerID="ebdd335570b3c39e2b275e04f1cdbcf87ba66ed6e99083be5a74665fba50e223" exitCode=0 Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.932619 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0a117d18-c25b-4817-a114-fdb48bf5151c","Type":"ContainerDied","Data":"491ba205407d35b276c95b2f9a229a455fef1daec816a0066aa733119bf84f29"} Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.943567 4915 generic.go:334] "Generic (PLEG): container finished" podID="521647b8-476f-4319-aa06-f9e5af5fdbe9" containerID="b369783a695a576c024bc7c7bcc2dc1c591e4f7d02b5a1e7c832103d1cbde7d3" exitCode=0 Nov 24 
21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.943672 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-598d4b5ccb-wjxn7" Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.943689 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-598d4b5ccb-wjxn7" event={"ID":"521647b8-476f-4319-aa06-f9e5af5fdbe9","Type":"ContainerDied","Data":"b369783a695a576c024bc7c7bcc2dc1c591e4f7d02b5a1e7c832103d1cbde7d3"} Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.944112 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-598d4b5ccb-wjxn7" event={"ID":"521647b8-476f-4319-aa06-f9e5af5fdbe9","Type":"ContainerDied","Data":"9054c5ac5f304c3fea0bc37a8cd1e95793b7cef8cb21b93541765749ed403568"} Nov 24 21:42:45 crc kubenswrapper[4915]: I1124 21:42:45.989471 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a117d18-c25b-4817-a114-fdb48bf5151c" (UID: "0a117d18-c25b-4817-a114-fdb48bf5151c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.026891 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfh4n\" (UniqueName: \"kubernetes.io/projected/0a117d18-c25b-4817-a114-fdb48bf5151c-kube-api-access-kfh4n\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.026922 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.026931 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.026942 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.059987 4915 scope.go:117] "RemoveContainer" containerID="ebdd335570b3c39e2b275e04f1cdbcf87ba66ed6e99083be5a74665fba50e223" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.063485 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-598d4b5ccb-wjxn7"] Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.072747 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-config-data" (OuterVolumeSpecName: "config-data") pod "0a117d18-c25b-4817-a114-fdb48bf5151c" (UID: "0a117d18-c25b-4817-a114-fdb48bf5151c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.075104 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-598d4b5ccb-wjxn7"] Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.092372 4915 scope.go:117] "RemoveContainer" containerID="bc94bee23ccc267471861487814026af25cebe2ecb665437a6a66118d3efa8b3" Nov 24 21:42:46 crc kubenswrapper[4915]: E1124 21:42:46.095892 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc94bee23ccc267471861487814026af25cebe2ecb665437a6a66118d3efa8b3\": container with ID starting with bc94bee23ccc267471861487814026af25cebe2ecb665437a6a66118d3efa8b3 not found: ID does not exist" containerID="bc94bee23ccc267471861487814026af25cebe2ecb665437a6a66118d3efa8b3" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.095966 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc94bee23ccc267471861487814026af25cebe2ecb665437a6a66118d3efa8b3"} err="failed to get container status \"bc94bee23ccc267471861487814026af25cebe2ecb665437a6a66118d3efa8b3\": rpc error: code = NotFound desc = could not find container \"bc94bee23ccc267471861487814026af25cebe2ecb665437a6a66118d3efa8b3\": container with ID starting with bc94bee23ccc267471861487814026af25cebe2ecb665437a6a66118d3efa8b3 not found: ID does not exist" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.096005 4915 scope.go:117] "RemoveContainer" containerID="ebdd335570b3c39e2b275e04f1cdbcf87ba66ed6e99083be5a74665fba50e223" Nov 24 21:42:46 crc kubenswrapper[4915]: E1124 21:42:46.096415 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebdd335570b3c39e2b275e04f1cdbcf87ba66ed6e99083be5a74665fba50e223\": container with ID starting with ebdd335570b3c39e2b275e04f1cdbcf87ba66ed6e99083be5a74665fba50e223 not found: ID 
does not exist" containerID="ebdd335570b3c39e2b275e04f1cdbcf87ba66ed6e99083be5a74665fba50e223" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.096524 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebdd335570b3c39e2b275e04f1cdbcf87ba66ed6e99083be5a74665fba50e223"} err="failed to get container status \"ebdd335570b3c39e2b275e04f1cdbcf87ba66ed6e99083be5a74665fba50e223\": rpc error: code = NotFound desc = could not find container \"ebdd335570b3c39e2b275e04f1cdbcf87ba66ed6e99083be5a74665fba50e223\": container with ID starting with ebdd335570b3c39e2b275e04f1cdbcf87ba66ed6e99083be5a74665fba50e223 not found: ID does not exist" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.096607 4915 scope.go:117] "RemoveContainer" containerID="36528c4fd166c46acc50b734dbc2700ef96a38c050fd1120d8ea510c6977f90e" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.123529 4915 scope.go:117] "RemoveContainer" containerID="b369783a695a576c024bc7c7bcc2dc1c591e4f7d02b5a1e7c832103d1cbde7d3" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.129374 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a117d18-c25b-4817-a114-fdb48bf5151c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.149108 4915 scope.go:117] "RemoveContainer" containerID="36528c4fd166c46acc50b734dbc2700ef96a38c050fd1120d8ea510c6977f90e" Nov 24 21:42:46 crc kubenswrapper[4915]: E1124 21:42:46.149515 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36528c4fd166c46acc50b734dbc2700ef96a38c050fd1120d8ea510c6977f90e\": container with ID starting with 36528c4fd166c46acc50b734dbc2700ef96a38c050fd1120d8ea510c6977f90e not found: ID does not exist" containerID="36528c4fd166c46acc50b734dbc2700ef96a38c050fd1120d8ea510c6977f90e" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 
21:42:46.149550 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36528c4fd166c46acc50b734dbc2700ef96a38c050fd1120d8ea510c6977f90e"} err="failed to get container status \"36528c4fd166c46acc50b734dbc2700ef96a38c050fd1120d8ea510c6977f90e\": rpc error: code = NotFound desc = could not find container \"36528c4fd166c46acc50b734dbc2700ef96a38c050fd1120d8ea510c6977f90e\": container with ID starting with 36528c4fd166c46acc50b734dbc2700ef96a38c050fd1120d8ea510c6977f90e not found: ID does not exist" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.149574 4915 scope.go:117] "RemoveContainer" containerID="b369783a695a576c024bc7c7bcc2dc1c591e4f7d02b5a1e7c832103d1cbde7d3" Nov 24 21:42:46 crc kubenswrapper[4915]: E1124 21:42:46.150050 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b369783a695a576c024bc7c7bcc2dc1c591e4f7d02b5a1e7c832103d1cbde7d3\": container with ID starting with b369783a695a576c024bc7c7bcc2dc1c591e4f7d02b5a1e7c832103d1cbde7d3 not found: ID does not exist" containerID="b369783a695a576c024bc7c7bcc2dc1c591e4f7d02b5a1e7c832103d1cbde7d3" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.150071 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b369783a695a576c024bc7c7bcc2dc1c591e4f7d02b5a1e7c832103d1cbde7d3"} err="failed to get container status \"b369783a695a576c024bc7c7bcc2dc1c591e4f7d02b5a1e7c832103d1cbde7d3\": rpc error: code = NotFound desc = could not find container \"b369783a695a576c024bc7c7bcc2dc1c591e4f7d02b5a1e7c832103d1cbde7d3\": container with ID starting with b369783a695a576c024bc7c7bcc2dc1c591e4f7d02b5a1e7c832103d1cbde7d3 not found: ID does not exist" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.283808 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.304135 
4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.314446 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:42:46 crc kubenswrapper[4915]: E1124 21:42:46.315121 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521647b8-476f-4319-aa06-f9e5af5fdbe9" containerName="neutron-httpd" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.315149 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="521647b8-476f-4319-aa06-f9e5af5fdbe9" containerName="neutron-httpd" Nov 24 21:42:46 crc kubenswrapper[4915]: E1124 21:42:46.315168 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e5395dd-1fa1-461c-b0eb-edc5c817955c" containerName="init" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.315177 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e5395dd-1fa1-461c-b0eb-edc5c817955c" containerName="init" Nov 24 21:42:46 crc kubenswrapper[4915]: E1124 21:42:46.315198 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521647b8-476f-4319-aa06-f9e5af5fdbe9" containerName="neutron-api" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.315208 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="521647b8-476f-4319-aa06-f9e5af5fdbe9" containerName="neutron-api" Nov 24 21:42:46 crc kubenswrapper[4915]: E1124 21:42:46.315220 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a117d18-c25b-4817-a114-fdb48bf5151c" containerName="probe" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.315230 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a117d18-c25b-4817-a114-fdb48bf5151c" containerName="probe" Nov 24 21:42:46 crc kubenswrapper[4915]: E1124 21:42:46.315242 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a117d18-c25b-4817-a114-fdb48bf5151c" containerName="cinder-scheduler" Nov 24 21:42:46 
crc kubenswrapper[4915]: I1124 21:42:46.315252 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a117d18-c25b-4817-a114-fdb48bf5151c" containerName="cinder-scheduler" Nov 24 21:42:46 crc kubenswrapper[4915]: E1124 21:42:46.315297 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e5395dd-1fa1-461c-b0eb-edc5c817955c" containerName="dnsmasq-dns" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.315306 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e5395dd-1fa1-461c-b0eb-edc5c817955c" containerName="dnsmasq-dns" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.315570 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a117d18-c25b-4817-a114-fdb48bf5151c" containerName="probe" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.315602 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="521647b8-476f-4319-aa06-f9e5af5fdbe9" containerName="neutron-api" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.315636 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="521647b8-476f-4319-aa06-f9e5af5fdbe9" containerName="neutron-httpd" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.315668 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e5395dd-1fa1-461c-b0eb-edc5c817955c" containerName="dnsmasq-dns" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.315696 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a117d18-c25b-4817-a114-fdb48bf5151c" containerName="cinder-scheduler" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.317369 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.324102 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.324328 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.451881 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a117d18-c25b-4817-a114-fdb48bf5151c" path="/var/lib/kubelet/pods/0a117d18-c25b-4817-a114-fdb48bf5151c/volumes" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.452710 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="521647b8-476f-4319-aa06-f9e5af5fdbe9" path="/var/lib/kubelet/pods/521647b8-476f-4319-aa06-f9e5af5fdbe9/volumes" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.454121 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nnsl\" (UniqueName: \"kubernetes.io/projected/e833f179-abc7-49e0-8cba-b50d7378ce5b-kube-api-access-2nnsl\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.454271 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e833f179-abc7-49e0-8cba-b50d7378ce5b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.454399 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e833f179-abc7-49e0-8cba-b50d7378ce5b-scripts\") pod \"cinder-scheduler-0\" (UID: 
\"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.454552 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e833f179-abc7-49e0-8cba-b50d7378ce5b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.454720 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e833f179-abc7-49e0-8cba-b50d7378ce5b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.454876 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e833f179-abc7-49e0-8cba-b50d7378ce5b-config-data\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.557174 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e833f179-abc7-49e0-8cba-b50d7378ce5b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.557286 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e833f179-abc7-49e0-8cba-b50d7378ce5b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc 
kubenswrapper[4915]: I1124 21:42:46.557295 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e833f179-abc7-49e0-8cba-b50d7378ce5b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.557665 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e833f179-abc7-49e0-8cba-b50d7378ce5b-config-data\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.557941 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nnsl\" (UniqueName: \"kubernetes.io/projected/e833f179-abc7-49e0-8cba-b50d7378ce5b-kube-api-access-2nnsl\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.557977 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e833f179-abc7-49e0-8cba-b50d7378ce5b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.558046 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e833f179-abc7-49e0-8cba-b50d7378ce5b-scripts\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.561794 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e833f179-abc7-49e0-8cba-b50d7378ce5b-config-data\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.562048 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e833f179-abc7-49e0-8cba-b50d7378ce5b-scripts\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.565354 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e833f179-abc7-49e0-8cba-b50d7378ce5b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.565468 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e833f179-abc7-49e0-8cba-b50d7378ce5b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.583484 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nnsl\" (UniqueName: \"kubernetes.io/projected/e833f179-abc7-49e0-8cba-b50d7378ce5b-kube-api-access-2nnsl\") pod \"cinder-scheduler-0\" (UID: \"e833f179-abc7-49e0-8cba-b50d7378ce5b\") " pod="openstack/cinder-scheduler-0" Nov 24 21:42:46 crc kubenswrapper[4915]: I1124 21:42:46.655555 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 24 21:42:47 crc kubenswrapper[4915]: I1124 21:42:47.115593 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 24 21:42:47 crc kubenswrapper[4915]: I1124 21:42:47.979460 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e833f179-abc7-49e0-8cba-b50d7378ce5b","Type":"ContainerStarted","Data":"65caaa1ec2992800f402bea7f51277382f13ed227e2a0d1908e4d268ba4319ad"} Nov 24 21:42:47 crc kubenswrapper[4915]: I1124 21:42:47.980614 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e833f179-abc7-49e0-8cba-b50d7378ce5b","Type":"ContainerStarted","Data":"71f527a8c49dc9f671a51667584e0e0e418574d1ffdd5d4d6350bbaa9e57fdf5"} Nov 24 21:42:48 crc kubenswrapper[4915]: I1124 21:42:48.132277 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:48 crc kubenswrapper[4915]: I1124 21:42:48.142910 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7985c5bcf4-b8zrw" Nov 24 21:42:48 crc kubenswrapper[4915]: I1124 21:42:48.263477 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-dcff88f94-kdzrd"] Nov 24 21:42:48 crc kubenswrapper[4915]: I1124 21:42:48.271859 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-dcff88f94-kdzrd" podUID="bf0c5bda-da34-4d01-8806-6016a8a24a9a" containerName="barbican-api-log" containerID="cri-o://ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1" gracePeriod=30 Nov 24 21:42:48 crc kubenswrapper[4915]: I1124 21:42:48.272594 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-dcff88f94-kdzrd" podUID="bf0c5bda-da34-4d01-8806-6016a8a24a9a" containerName="barbican-api" 
containerID="cri-o://83decd64c460ab8530b49cf8004cdfc522963351a6ed7b0abb6a33421ab2bd0e" gracePeriod=30 Nov 24 21:42:48 crc kubenswrapper[4915]: E1124 21:42:48.355507 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf0c5bda_da34_4d01_8806_6016a8a24a9a.slice/crio-ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e5395dd_1fa1_461c_b0eb_edc5c817955c.slice/crio-conmon-a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf0c5bda_da34_4d01_8806_6016a8a24a9a.slice/crio-conmon-ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:42:48 crc kubenswrapper[4915]: E1124 21:42:48.356175 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf0c5bda_da34_4d01_8806_6016a8a24a9a.slice/crio-ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf0c5bda_da34_4d01_8806_6016a8a24a9a.slice/crio-conmon-ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e5395dd_1fa1_461c_b0eb_edc5c817955c.slice/crio-conmon-a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:42:48 crc kubenswrapper[4915]: I1124 21:42:48.992630 4915 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e833f179-abc7-49e0-8cba-b50d7378ce5b","Type":"ContainerStarted","Data":"6e7edf90bf53421b886fc7714c9a9fab0ba57996e5ad21c67c0ce4df3c42f04f"} Nov 24 21:42:48 crc kubenswrapper[4915]: I1124 21:42:48.995113 4915 generic.go:334] "Generic (PLEG): container finished" podID="bf0c5bda-da34-4d01-8806-6016a8a24a9a" containerID="ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1" exitCode=143 Nov 24 21:42:48 crc kubenswrapper[4915]: I1124 21:42:48.996196 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dcff88f94-kdzrd" event={"ID":"bf0c5bda-da34-4d01-8806-6016a8a24a9a","Type":"ContainerDied","Data":"ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1"} Nov 24 21:42:49 crc kubenswrapper[4915]: I1124 21:42:49.023631 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.023614592 podStartE2EDuration="3.023614592s" podCreationTimestamp="2025-11-24 21:42:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:42:49.011370211 +0000 UTC m=+1387.327622394" watchObservedRunningTime="2025-11-24 21:42:49.023614592 +0000 UTC m=+1387.339866765" Nov 24 21:42:50 crc kubenswrapper[4915]: I1124 21:42:50.861336 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:50 crc kubenswrapper[4915]: I1124 21:42:50.862603 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6db74dbdbb-6ntbq" Nov 24 21:42:51 crc kubenswrapper[4915]: E1124 21:42:51.331021 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e5395dd_1fa1_461c_b0eb_edc5c817955c.slice/crio-conmon-a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:42:51 crc kubenswrapper[4915]: I1124 21:42:51.450370 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-dcff88f94-kdzrd" podUID="bf0c5bda-da34-4d01-8806-6016a8a24a9a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.198:9311/healthcheck\": read tcp 10.217.0.2:48286->10.217.0.198:9311: read: connection reset by peer" Nov 24 21:42:51 crc kubenswrapper[4915]: I1124 21:42:51.451069 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-dcff88f94-kdzrd" podUID="bf0c5bda-da34-4d01-8806-6016a8a24a9a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.198:9311/healthcheck\": read tcp 10.217.0.2:48284->10.217.0.198:9311: read: connection reset by peer" Nov 24 21:42:51 crc kubenswrapper[4915]: I1124 21:42:51.657392 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 24 21:42:51 crc kubenswrapper[4915]: E1124 21:42:51.815568 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e5395dd_1fa1_461c_b0eb_edc5c817955c.slice/crio-conmon-a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:42:51 crc kubenswrapper[4915]: I1124 21:42:51.886642 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 24 21:42:51 crc kubenswrapper[4915]: I1124 21:42:51.914587 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:51 crc kubenswrapper[4915]: I1124 21:42:51.938909 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6b68f549f-hk4pm" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.018512 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr2c4\" (UniqueName: \"kubernetes.io/projected/bf0c5bda-da34-4d01-8806-6016a8a24a9a-kube-api-access-sr2c4\") pod \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.018699 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-config-data\") pod \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.018876 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-combined-ca-bundle\") pod \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.018907 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-config-data-custom\") pod \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\" (UID: \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.019024 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0c5bda-da34-4d01-8806-6016a8a24a9a-logs\") pod \"bf0c5bda-da34-4d01-8806-6016a8a24a9a\" (UID: 
\"bf0c5bda-da34-4d01-8806-6016a8a24a9a\") " Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.033082 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf0c5bda-da34-4d01-8806-6016a8a24a9a-logs" (OuterVolumeSpecName: "logs") pod "bf0c5bda-da34-4d01-8806-6016a8a24a9a" (UID: "bf0c5bda-da34-4d01-8806-6016a8a24a9a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.033385 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf0c5bda-da34-4d01-8806-6016a8a24a9a-kube-api-access-sr2c4" (OuterVolumeSpecName: "kube-api-access-sr2c4") pod "bf0c5bda-da34-4d01-8806-6016a8a24a9a" (UID: "bf0c5bda-da34-4d01-8806-6016a8a24a9a"). InnerVolumeSpecName "kube-api-access-sr2c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.038090 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bf0c5bda-da34-4d01-8806-6016a8a24a9a" (UID: "bf0c5bda-da34-4d01-8806-6016a8a24a9a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.107086 4915 generic.go:334] "Generic (PLEG): container finished" podID="bf0c5bda-da34-4d01-8806-6016a8a24a9a" containerID="83decd64c460ab8530b49cf8004cdfc522963351a6ed7b0abb6a33421ab2bd0e" exitCode=0 Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.107138 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dcff88f94-kdzrd" event={"ID":"bf0c5bda-da34-4d01-8806-6016a8a24a9a","Type":"ContainerDied","Data":"83decd64c460ab8530b49cf8004cdfc522963351a6ed7b0abb6a33421ab2bd0e"} Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.107171 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-dcff88f94-kdzrd" event={"ID":"bf0c5bda-da34-4d01-8806-6016a8a24a9a","Type":"ContainerDied","Data":"5cb4bd4e3cd768a0bf0f2ad90c645e5aaa25273151296d5371653394719b0bb4"} Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.107191 4915 scope.go:117] "RemoveContainer" containerID="83decd64c460ab8530b49cf8004cdfc522963351a6ed7b0abb6a33421ab2bd0e" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.107377 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-dcff88f94-kdzrd" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.121523 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.121574 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf0c5bda-da34-4d01-8806-6016a8a24a9a-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.121584 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr2c4\" (UniqueName: \"kubernetes.io/projected/bf0c5bda-da34-4d01-8806-6016a8a24a9a-kube-api-access-sr2c4\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.144954 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf0c5bda-da34-4d01-8806-6016a8a24a9a" (UID: "bf0c5bda-da34-4d01-8806-6016a8a24a9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.175009 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-config-data" (OuterVolumeSpecName: "config-data") pod "bf0c5bda-da34-4d01-8806-6016a8a24a9a" (UID: "bf0c5bda-da34-4d01-8806-6016a8a24a9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.199254 4915 scope.go:117] "RemoveContainer" containerID="ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.226064 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.226098 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf0c5bda-da34-4d01-8806-6016a8a24a9a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.255747 4915 scope.go:117] "RemoveContainer" containerID="83decd64c460ab8530b49cf8004cdfc522963351a6ed7b0abb6a33421ab2bd0e" Nov 24 21:42:52 crc kubenswrapper[4915]: E1124 21:42:52.256348 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83decd64c460ab8530b49cf8004cdfc522963351a6ed7b0abb6a33421ab2bd0e\": container with ID starting with 83decd64c460ab8530b49cf8004cdfc522963351a6ed7b0abb6a33421ab2bd0e not found: ID does not exist" containerID="83decd64c460ab8530b49cf8004cdfc522963351a6ed7b0abb6a33421ab2bd0e" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.256404 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83decd64c460ab8530b49cf8004cdfc522963351a6ed7b0abb6a33421ab2bd0e"} err="failed to get container status \"83decd64c460ab8530b49cf8004cdfc522963351a6ed7b0abb6a33421ab2bd0e\": rpc error: code = NotFound desc = could not find container \"83decd64c460ab8530b49cf8004cdfc522963351a6ed7b0abb6a33421ab2bd0e\": container with ID starting with 83decd64c460ab8530b49cf8004cdfc522963351a6ed7b0abb6a33421ab2bd0e not found: ID does not 
exist" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.256430 4915 scope.go:117] "RemoveContainer" containerID="ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1" Nov 24 21:42:52 crc kubenswrapper[4915]: E1124 21:42:52.256875 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1\": container with ID starting with ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1 not found: ID does not exist" containerID="ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.256916 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1"} err="failed to get container status \"ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1\": rpc error: code = NotFound desc = could not find container \"ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1\": container with ID starting with ebac6b5dee363c1910f1e0fd36a228828c61ac69b1c7dbea011d4bc67d3216b1 not found: ID does not exist" Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.456259 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-dcff88f94-kdzrd"] Nov 24 21:42:52 crc kubenswrapper[4915]: I1124 21:42:52.464470 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-dcff88f94-kdzrd"] Nov 24 21:42:54 crc kubenswrapper[4915]: I1124 21:42:54.440721 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf0c5bda-da34-4d01-8806-6016a8a24a9a" path="/var/lib/kubelet/pods/bf0c5bda-da34-4d01-8806-6016a8a24a9a/volumes" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.518970 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 24 21:42:55 crc 
kubenswrapper[4915]: E1124 21:42:55.519680 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0c5bda-da34-4d01-8806-6016a8a24a9a" containerName="barbican-api-log" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.519719 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0c5bda-da34-4d01-8806-6016a8a24a9a" containerName="barbican-api-log" Nov 24 21:42:55 crc kubenswrapper[4915]: E1124 21:42:55.519864 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf0c5bda-da34-4d01-8806-6016a8a24a9a" containerName="barbican-api" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.519891 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf0c5bda-da34-4d01-8806-6016a8a24a9a" containerName="barbican-api" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.520322 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0c5bda-da34-4d01-8806-6016a8a24a9a" containerName="barbican-api" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.520364 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf0c5bda-da34-4d01-8806-6016a8a24a9a" containerName="barbican-api-log" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.521749 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.522078 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.524240 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.524497 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.524659 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-8zzds" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.603032 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8274e484-f997-4150-a496-f8f1a68ccfae-openstack-config-secret\") pod \"openstackclient\" (UID: \"8274e484-f997-4150-a496-f8f1a68ccfae\") " pod="openstack/openstackclient" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.603529 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8274e484-f997-4150-a496-f8f1a68ccfae-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8274e484-f997-4150-a496-f8f1a68ccfae\") " pod="openstack/openstackclient" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.603858 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28l7c\" (UniqueName: \"kubernetes.io/projected/8274e484-f997-4150-a496-f8f1a68ccfae-kube-api-access-28l7c\") pod \"openstackclient\" (UID: \"8274e484-f997-4150-a496-f8f1a68ccfae\") " pod="openstack/openstackclient" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.604090 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8274e484-f997-4150-a496-f8f1a68ccfae-openstack-config\") pod \"openstackclient\" (UID: \"8274e484-f997-4150-a496-f8f1a68ccfae\") " pod="openstack/openstackclient" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.706564 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8274e484-f997-4150-a496-f8f1a68ccfae-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8274e484-f997-4150-a496-f8f1a68ccfae\") " pod="openstack/openstackclient" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.706683 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28l7c\" (UniqueName: \"kubernetes.io/projected/8274e484-f997-4150-a496-f8f1a68ccfae-kube-api-access-28l7c\") pod \"openstackclient\" (UID: \"8274e484-f997-4150-a496-f8f1a68ccfae\") " pod="openstack/openstackclient" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.706731 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8274e484-f997-4150-a496-f8f1a68ccfae-openstack-config\") pod \"openstackclient\" (UID: \"8274e484-f997-4150-a496-f8f1a68ccfae\") " pod="openstack/openstackclient" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.706790 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8274e484-f997-4150-a496-f8f1a68ccfae-openstack-config-secret\") pod \"openstackclient\" (UID: \"8274e484-f997-4150-a496-f8f1a68ccfae\") " pod="openstack/openstackclient" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.707536 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/8274e484-f997-4150-a496-f8f1a68ccfae-openstack-config\") pod \"openstackclient\" (UID: \"8274e484-f997-4150-a496-f8f1a68ccfae\") " pod="openstack/openstackclient" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.712516 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8274e484-f997-4150-a496-f8f1a68ccfae-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8274e484-f997-4150-a496-f8f1a68ccfae\") " pod="openstack/openstackclient" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.713150 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8274e484-f997-4150-a496-f8f1a68ccfae-openstack-config-secret\") pod \"openstackclient\" (UID: \"8274e484-f997-4150-a496-f8f1a68ccfae\") " pod="openstack/openstackclient" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.726291 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28l7c\" (UniqueName: \"kubernetes.io/projected/8274e484-f997-4150-a496-f8f1a68ccfae-kube-api-access-28l7c\") pod \"openstackclient\" (UID: \"8274e484-f997-4150-a496-f8f1a68ccfae\") " pod="openstack/openstackclient" Nov 24 21:42:55 crc kubenswrapper[4915]: I1124 21:42:55.857572 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 24 21:42:56 crc kubenswrapper[4915]: W1124 21:42:56.357661 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8274e484_f997_4150_a496_f8f1a68ccfae.slice/crio-25a3e0ce2bd4ffd4ecd791ef2587c80d47d37b25f8973b345dac554b858fb365 WatchSource:0}: Error finding container 25a3e0ce2bd4ffd4ecd791ef2587c80d47d37b25f8973b345dac554b858fb365: Status 404 returned error can't find the container with id 25a3e0ce2bd4ffd4ecd791ef2587c80d47d37b25f8973b345dac554b858fb365 Nov 24 21:42:56 crc kubenswrapper[4915]: I1124 21:42:56.358716 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 24 21:42:56 crc kubenswrapper[4915]: I1124 21:42:56.869506 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 24 21:42:57 crc kubenswrapper[4915]: I1124 21:42:57.180565 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8274e484-f997-4150-a496-f8f1a68ccfae","Type":"ContainerStarted","Data":"25a3e0ce2bd4ffd4ecd791ef2587c80d47d37b25f8973b345dac554b858fb365"} Nov 24 21:42:58 crc kubenswrapper[4915]: I1124 21:42:58.926405 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:42:58 crc kubenswrapper[4915]: I1124 21:42:58.926972 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="ceilometer-central-agent" containerID="cri-o://af69cdeddcb763ac67d6fb42617ab2bf479f34a1236215f298acc13cf506a0fd" gracePeriod=30 Nov 24 21:42:58 crc kubenswrapper[4915]: I1124 21:42:58.927406 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="proxy-httpd" 
containerID="cri-o://de28da0b8d5d47e29d3a7c9417ee3bf491088f5807102c18164da015e0d3d1fb" gracePeriod=30 Nov 24 21:42:58 crc kubenswrapper[4915]: I1124 21:42:58.927453 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="sg-core" containerID="cri-o://4d3eb1db0c501024a55b79378896576216e89f5ceb45900cdad7aa5e034a9e8a" gracePeriod=30 Nov 24 21:42:58 crc kubenswrapper[4915]: I1124 21:42:58.927490 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="ceilometer-notification-agent" containerID="cri-o://02a4b55630a569a5e17e835d5af62aa578681460f95278eb7ed660458ded7995" gracePeriod=30 Nov 24 21:42:59 crc kubenswrapper[4915]: I1124 21:42:59.144055 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.199:3000/\": EOF" Nov 24 21:42:59 crc kubenswrapper[4915]: I1124 21:42:59.220634 4915 generic.go:334] "Generic (PLEG): container finished" podID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerID="de28da0b8d5d47e29d3a7c9417ee3bf491088f5807102c18164da015e0d3d1fb" exitCode=0 Nov 24 21:42:59 crc kubenswrapper[4915]: I1124 21:42:59.220675 4915 generic.go:334] "Generic (PLEG): container finished" podID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerID="4d3eb1db0c501024a55b79378896576216e89f5ceb45900cdad7aa5e034a9e8a" exitCode=2 Nov 24 21:42:59 crc kubenswrapper[4915]: I1124 21:42:59.220700 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cddeb8d-6499-40ce-866b-6009ade95f6c","Type":"ContainerDied","Data":"de28da0b8d5d47e29d3a7c9417ee3bf491088f5807102c18164da015e0d3d1fb"} Nov 24 21:42:59 crc kubenswrapper[4915]: I1124 21:42:59.220733 4915 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cddeb8d-6499-40ce-866b-6009ade95f6c","Type":"ContainerDied","Data":"4d3eb1db0c501024a55b79378896576216e89f5ceb45900cdad7aa5e034a9e8a"} Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.027692 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-xkncx"] Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.034177 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xkncx" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.046886 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xkncx"] Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.121515 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncc8w\" (UniqueName: \"kubernetes.io/projected/c50a58e9-839d-4c0a-9aea-e27e9b892db0-kube-api-access-ncc8w\") pod \"nova-api-db-create-xkncx\" (UID: \"c50a58e9-839d-4c0a-9aea-e27e9b892db0\") " pod="openstack/nova-api-db-create-xkncx" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.121854 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50a58e9-839d-4c0a-9aea-e27e9b892db0-operator-scripts\") pod \"nova-api-db-create-xkncx\" (UID: \"c50a58e9-839d-4c0a-9aea-e27e9b892db0\") " pod="openstack/nova-api-db-create-xkncx" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.134677 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-541f-account-create-6xjb9"] Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.136225 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-541f-account-create-6xjb9" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.139004 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.145902 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-541f-account-create-6xjb9"] Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.224153 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ea7f227-3e0f-4e4f-9136-22a74b0f3057-operator-scripts\") pod \"nova-api-541f-account-create-6xjb9\" (UID: \"5ea7f227-3e0f-4e4f-9136-22a74b0f3057\") " pod="openstack/nova-api-541f-account-create-6xjb9" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.224269 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncc8w\" (UniqueName: \"kubernetes.io/projected/c50a58e9-839d-4c0a-9aea-e27e9b892db0-kube-api-access-ncc8w\") pod \"nova-api-db-create-xkncx\" (UID: \"c50a58e9-839d-4c0a-9aea-e27e9b892db0\") " pod="openstack/nova-api-db-create-xkncx" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.224435 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqlp2\" (UniqueName: \"kubernetes.io/projected/5ea7f227-3e0f-4e4f-9136-22a74b0f3057-kube-api-access-cqlp2\") pod \"nova-api-541f-account-create-6xjb9\" (UID: \"5ea7f227-3e0f-4e4f-9136-22a74b0f3057\") " pod="openstack/nova-api-541f-account-create-6xjb9" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.224555 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50a58e9-839d-4c0a-9aea-e27e9b892db0-operator-scripts\") pod \"nova-api-db-create-xkncx\" (UID: 
\"c50a58e9-839d-4c0a-9aea-e27e9b892db0\") " pod="openstack/nova-api-db-create-xkncx" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.225467 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50a58e9-839d-4c0a-9aea-e27e9b892db0-operator-scripts\") pod \"nova-api-db-create-xkncx\" (UID: \"c50a58e9-839d-4c0a-9aea-e27e9b892db0\") " pod="openstack/nova-api-db-create-xkncx" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.242307 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-2688t"] Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.244482 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2688t" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.245076 4915 generic.go:334] "Generic (PLEG): container finished" podID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerID="af69cdeddcb763ac67d6fb42617ab2bf479f34a1236215f298acc13cf506a0fd" exitCode=0 Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.245120 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cddeb8d-6499-40ce-866b-6009ade95f6c","Type":"ContainerDied","Data":"af69cdeddcb763ac67d6fb42617ab2bf479f34a1236215f298acc13cf506a0fd"} Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.249418 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncc8w\" (UniqueName: \"kubernetes.io/projected/c50a58e9-839d-4c0a-9aea-e27e9b892db0-kube-api-access-ncc8w\") pod \"nova-api-db-create-xkncx\" (UID: \"c50a58e9-839d-4c0a-9aea-e27e9b892db0\") " pod="openstack/nova-api-db-create-xkncx" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.254554 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2688t"] Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.330468 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqlp2\" (UniqueName: \"kubernetes.io/projected/5ea7f227-3e0f-4e4f-9136-22a74b0f3057-kube-api-access-cqlp2\") pod \"nova-api-541f-account-create-6xjb9\" (UID: \"5ea7f227-3e0f-4e4f-9136-22a74b0f3057\") " pod="openstack/nova-api-541f-account-create-6xjb9" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.330535 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrs6b\" (UniqueName: \"kubernetes.io/projected/0ca21771-e57e-49fc-a4c0-a8fea17fd2ea-kube-api-access-nrs6b\") pod \"nova-cell0-db-create-2688t\" (UID: \"0ca21771-e57e-49fc-a4c0-a8fea17fd2ea\") " pod="openstack/nova-cell0-db-create-2688t" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.330636 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ca21771-e57e-49fc-a4c0-a8fea17fd2ea-operator-scripts\") pod \"nova-cell0-db-create-2688t\" (UID: \"0ca21771-e57e-49fc-a4c0-a8fea17fd2ea\") " pod="openstack/nova-cell0-db-create-2688t" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.330740 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ea7f227-3e0f-4e4f-9136-22a74b0f3057-operator-scripts\") pod \"nova-api-541f-account-create-6xjb9\" (UID: \"5ea7f227-3e0f-4e4f-9136-22a74b0f3057\") " pod="openstack/nova-api-541f-account-create-6xjb9" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.332474 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ea7f227-3e0f-4e4f-9136-22a74b0f3057-operator-scripts\") pod \"nova-api-541f-account-create-6xjb9\" (UID: \"5ea7f227-3e0f-4e4f-9136-22a74b0f3057\") " pod="openstack/nova-api-541f-account-create-6xjb9" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 
21:43:00.353572 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-678kk"] Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.355679 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-678kk" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.356443 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqlp2\" (UniqueName: \"kubernetes.io/projected/5ea7f227-3e0f-4e4f-9136-22a74b0f3057-kube-api-access-cqlp2\") pod \"nova-api-541f-account-create-6xjb9\" (UID: \"5ea7f227-3e0f-4e4f-9136-22a74b0f3057\") " pod="openstack/nova-api-541f-account-create-6xjb9" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.370764 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xkncx" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.390966 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-05c3-account-create-87xhn"] Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.393241 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-05c3-account-create-87xhn" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.394837 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.404735 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-678kk"] Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.432698 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrs6b\" (UniqueName: \"kubernetes.io/projected/0ca21771-e57e-49fc-a4c0-a8fea17fd2ea-kube-api-access-nrs6b\") pod \"nova-cell0-db-create-2688t\" (UID: \"0ca21771-e57e-49fc-a4c0-a8fea17fd2ea\") " pod="openstack/nova-cell0-db-create-2688t" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.432749 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4btrz\" (UniqueName: \"kubernetes.io/projected/789abde1-5e42-4032-8b87-e3c475c709e8-kube-api-access-4btrz\") pod \"nova-cell1-db-create-678kk\" (UID: \"789abde1-5e42-4032-8b87-e3c475c709e8\") " pod="openstack/nova-cell1-db-create-678kk" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.432856 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ca21771-e57e-49fc-a4c0-a8fea17fd2ea-operator-scripts\") pod \"nova-cell0-db-create-2688t\" (UID: \"0ca21771-e57e-49fc-a4c0-a8fea17fd2ea\") " pod="openstack/nova-cell0-db-create-2688t" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.432893 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/789abde1-5e42-4032-8b87-e3c475c709e8-operator-scripts\") pod \"nova-cell1-db-create-678kk\" (UID: \"789abde1-5e42-4032-8b87-e3c475c709e8\") " 
pod="openstack/nova-cell1-db-create-678kk" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.433601 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ca21771-e57e-49fc-a4c0-a8fea17fd2ea-operator-scripts\") pod \"nova-cell0-db-create-2688t\" (UID: \"0ca21771-e57e-49fc-a4c0-a8fea17fd2ea\") " pod="openstack/nova-cell0-db-create-2688t" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.449644 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrs6b\" (UniqueName: \"kubernetes.io/projected/0ca21771-e57e-49fc-a4c0-a8fea17fd2ea-kube-api-access-nrs6b\") pod \"nova-cell0-db-create-2688t\" (UID: \"0ca21771-e57e-49fc-a4c0-a8fea17fd2ea\") " pod="openstack/nova-cell0-db-create-2688t" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.455739 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-05c3-account-create-87xhn"] Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.473324 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-541f-account-create-6xjb9" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.534965 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4btrz\" (UniqueName: \"kubernetes.io/projected/789abde1-5e42-4032-8b87-e3c475c709e8-kube-api-access-4btrz\") pod \"nova-cell1-db-create-678kk\" (UID: \"789abde1-5e42-4032-8b87-e3c475c709e8\") " pod="openstack/nova-cell1-db-create-678kk" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.535041 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1089ced-cc42-419b-a130-e71479e51b16-operator-scripts\") pod \"nova-cell0-05c3-account-create-87xhn\" (UID: \"f1089ced-cc42-419b-a130-e71479e51b16\") " pod="openstack/nova-cell0-05c3-account-create-87xhn" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.535069 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr4mh\" (UniqueName: \"kubernetes.io/projected/f1089ced-cc42-419b-a130-e71479e51b16-kube-api-access-pr4mh\") pod \"nova-cell0-05c3-account-create-87xhn\" (UID: \"f1089ced-cc42-419b-a130-e71479e51b16\") " pod="openstack/nova-cell0-05c3-account-create-87xhn" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.535095 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/789abde1-5e42-4032-8b87-e3c475c709e8-operator-scripts\") pod \"nova-cell1-db-create-678kk\" (UID: \"789abde1-5e42-4032-8b87-e3c475c709e8\") " pod="openstack/nova-cell1-db-create-678kk" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.544086 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/789abde1-5e42-4032-8b87-e3c475c709e8-operator-scripts\") pod 
\"nova-cell1-db-create-678kk\" (UID: \"789abde1-5e42-4032-8b87-e3c475c709e8\") " pod="openstack/nova-cell1-db-create-678kk" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.579754 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4btrz\" (UniqueName: \"kubernetes.io/projected/789abde1-5e42-4032-8b87-e3c475c709e8-kube-api-access-4btrz\") pod \"nova-cell1-db-create-678kk\" (UID: \"789abde1-5e42-4032-8b87-e3c475c709e8\") " pod="openstack/nova-cell1-db-create-678kk" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.580093 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-1fec-account-create-c2mcb"] Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.582613 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1fec-account-create-c2mcb" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.586229 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.601239 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1fec-account-create-c2mcb"] Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.620547 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-678kk" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.624323 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-2688t" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.639063 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4387d1ab-ac63-465a-a164-1fb5f78352da-operator-scripts\") pod \"nova-cell1-1fec-account-create-c2mcb\" (UID: \"4387d1ab-ac63-465a-a164-1fb5f78352da\") " pod="openstack/nova-cell1-1fec-account-create-c2mcb" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.639133 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8vbv\" (UniqueName: \"kubernetes.io/projected/4387d1ab-ac63-465a-a164-1fb5f78352da-kube-api-access-t8vbv\") pod \"nova-cell1-1fec-account-create-c2mcb\" (UID: \"4387d1ab-ac63-465a-a164-1fb5f78352da\") " pod="openstack/nova-cell1-1fec-account-create-c2mcb" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.639268 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1089ced-cc42-419b-a130-e71479e51b16-operator-scripts\") pod \"nova-cell0-05c3-account-create-87xhn\" (UID: \"f1089ced-cc42-419b-a130-e71479e51b16\") " pod="openstack/nova-cell0-05c3-account-create-87xhn" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.639294 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr4mh\" (UniqueName: \"kubernetes.io/projected/f1089ced-cc42-419b-a130-e71479e51b16-kube-api-access-pr4mh\") pod \"nova-cell0-05c3-account-create-87xhn\" (UID: \"f1089ced-cc42-419b-a130-e71479e51b16\") " pod="openstack/nova-cell0-05c3-account-create-87xhn" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.639965 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f1089ced-cc42-419b-a130-e71479e51b16-operator-scripts\") pod \"nova-cell0-05c3-account-create-87xhn\" (UID: \"f1089ced-cc42-419b-a130-e71479e51b16\") " pod="openstack/nova-cell0-05c3-account-create-87xhn" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.669879 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr4mh\" (UniqueName: \"kubernetes.io/projected/f1089ced-cc42-419b-a130-e71479e51b16-kube-api-access-pr4mh\") pod \"nova-cell0-05c3-account-create-87xhn\" (UID: \"f1089ced-cc42-419b-a130-e71479e51b16\") " pod="openstack/nova-cell0-05c3-account-create-87xhn" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.744178 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4387d1ab-ac63-465a-a164-1fb5f78352da-operator-scripts\") pod \"nova-cell1-1fec-account-create-c2mcb\" (UID: \"4387d1ab-ac63-465a-a164-1fb5f78352da\") " pod="openstack/nova-cell1-1fec-account-create-c2mcb" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.744255 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8vbv\" (UniqueName: \"kubernetes.io/projected/4387d1ab-ac63-465a-a164-1fb5f78352da-kube-api-access-t8vbv\") pod \"nova-cell1-1fec-account-create-c2mcb\" (UID: \"4387d1ab-ac63-465a-a164-1fb5f78352da\") " pod="openstack/nova-cell1-1fec-account-create-c2mcb" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.745124 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4387d1ab-ac63-465a-a164-1fb5f78352da-operator-scripts\") pod \"nova-cell1-1fec-account-create-c2mcb\" (UID: \"4387d1ab-ac63-465a-a164-1fb5f78352da\") " pod="openstack/nova-cell1-1fec-account-create-c2mcb" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.761601 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-t8vbv\" (UniqueName: \"kubernetes.io/projected/4387d1ab-ac63-465a-a164-1fb5f78352da-kube-api-access-t8vbv\") pod \"nova-cell1-1fec-account-create-c2mcb\" (UID: \"4387d1ab-ac63-465a-a164-1fb5f78352da\") " pod="openstack/nova-cell1-1fec-account-create-c2mcb" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.933984 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-05c3-account-create-87xhn" Nov 24 21:43:00 crc kubenswrapper[4915]: I1124 21:43:00.970597 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1fec-account-create-c2mcb" Nov 24 21:43:01 crc kubenswrapper[4915]: I1124 21:43:01.013819 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xkncx"] Nov 24 21:43:01 crc kubenswrapper[4915]: E1124 21:43:01.670992 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e5395dd_1fa1_461c_b0eb_edc5c817955c.slice/crio-conmon-a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:43:01 crc kubenswrapper[4915]: I1124 21:43:01.932253 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.199:3000/\": dial tcp 10.217.0.199:3000: connect: connection refused" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.213591 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-968b48459-dfqg8"] Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.220366 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.223738 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.224010 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.224129 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.227859 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-968b48459-dfqg8"] Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.330234 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-run-httpd\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.330334 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-combined-ca-bundle\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.330651 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5fnp\" (UniqueName: \"kubernetes.io/projected/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-kube-api-access-z5fnp\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc 
kubenswrapper[4915]: I1124 21:43:03.330769 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-public-tls-certs\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.330849 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-internal-tls-certs\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.331087 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-log-httpd\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.331128 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-config-data\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.331188 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-etc-swift\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 
21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.433721 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5fnp\" (UniqueName: \"kubernetes.io/projected/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-kube-api-access-z5fnp\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.433853 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-public-tls-certs\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.433883 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-internal-tls-certs\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.433938 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-log-httpd\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.433959 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-config-data\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.433984 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-etc-swift\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.434063 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-run-httpd\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.434103 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-combined-ca-bundle\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.435219 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-run-httpd\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.435556 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-log-httpd\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.442286 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-config-data\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.442930 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-etc-swift\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.448334 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-public-tls-certs\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.463361 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-combined-ca-bundle\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.463627 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-internal-tls-certs\") pod \"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.469502 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5fnp\" (UniqueName: \"kubernetes.io/projected/cdff3fcc-adf5-4e04-9cea-b12c43b4f025-kube-api-access-z5fnp\") pod 
\"swift-proxy-968b48459-dfqg8\" (UID: \"cdff3fcc-adf5-4e04-9cea-b12c43b4f025\") " pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:03 crc kubenswrapper[4915]: I1124 21:43:03.565403 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.109524 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-8f669b545-6bqfj"] Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.116010 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.122638 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.122952 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.124406 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-7vhk4" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.154734 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-config-data\") pod \"heat-engine-8f669b545-6bqfj\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.158239 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-config-data-custom\") pod \"heat-engine-8f669b545-6bqfj\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:04 crc kubenswrapper[4915]: 
I1124 21:43:04.158611 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-combined-ca-bundle\") pod \"heat-engine-8f669b545-6bqfj\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.158629 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbkqt\" (UniqueName: \"kubernetes.io/projected/a652c587-ebd5-4783-8397-9275b5a3b682-kube-api-access-fbkqt\") pod \"heat-engine-8f669b545-6bqfj\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.159945 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8f669b545-6bqfj"] Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.260456 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-config-data\") pod \"heat-engine-8f669b545-6bqfj\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.260567 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-config-data-custom\") pod \"heat-engine-8f669b545-6bqfj\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.260705 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-combined-ca-bundle\") pod 
\"heat-engine-8f669b545-6bqfj\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.260729 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbkqt\" (UniqueName: \"kubernetes.io/projected/a652c587-ebd5-4783-8397-9275b5a3b682-kube-api-access-fbkqt\") pod \"heat-engine-8f669b545-6bqfj\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.265173 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ncs8x"] Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.267239 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.288006 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-config-data\") pod \"heat-engine-8f669b545-6bqfj\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.288962 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-combined-ca-bundle\") pod \"heat-engine-8f669b545-6bqfj\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.294652 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ncs8x"] Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.300613 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-config-data-custom\") pod \"heat-engine-8f669b545-6bqfj\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.319395 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7b6fc899f8-tjzhx"] Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.321179 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.324176 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.331676 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbkqt\" (UniqueName: \"kubernetes.io/projected/a652c587-ebd5-4783-8397-9275b5a3b682-kube-api-access-fbkqt\") pod \"heat-engine-8f669b545-6bqfj\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.345187 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b6fc899f8-tjzhx"] Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.363403 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57nz6\" (UniqueName: \"kubernetes.io/projected/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-kube-api-access-57nz6\") pod \"heat-cfnapi-7b6fc899f8-tjzhx\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.363499 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-dns-swift-storage-0\") pod 
\"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.363537 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.363595 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.363620 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-config-data\") pod \"heat-cfnapi-7b6fc899f8-tjzhx\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.363656 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-config\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.363679 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwvng\" (UniqueName: 
\"kubernetes.io/projected/0327c0c2-dd19-48b6-946d-3b1460a7a260-kube-api-access-dwvng\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.363743 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-config-data-custom\") pod \"heat-cfnapi-7b6fc899f8-tjzhx\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.363803 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-combined-ca-bundle\") pod \"heat-cfnapi-7b6fc899f8-tjzhx\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.363861 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.407694 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-54cc46598c-928hf"] Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.409307 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.411249 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.466096 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.466185 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57nz6\" (UniqueName: \"kubernetes.io/projected/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-kube-api-access-57nz6\") pod \"heat-cfnapi-7b6fc899f8-tjzhx\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.466243 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.466284 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.466326 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.466366 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-config-data\") pod \"heat-cfnapi-7b6fc899f8-tjzhx\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.466406 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-config\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.466450 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwvng\" (UniqueName: \"kubernetes.io/projected/0327c0c2-dd19-48b6-946d-3b1460a7a260-kube-api-access-dwvng\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.466537 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-config-data-custom\") pod \"heat-cfnapi-7b6fc899f8-tjzhx\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.466581 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-combined-ca-bundle\") pod \"heat-cfnapi-7b6fc899f8-tjzhx\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.478672 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.479049 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.480044 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-config\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.480560 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-54cc46598c-928hf"] Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.481979 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.482224 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: 
\"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.483288 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-config-data-custom\") pod \"heat-cfnapi-7b6fc899f8-tjzhx\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.485666 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.485673 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-config-data\") pod \"heat-cfnapi-7b6fc899f8-tjzhx\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.487638 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-combined-ca-bundle\") pod \"heat-cfnapi-7b6fc899f8-tjzhx\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.493801 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwvng\" (UniqueName: \"kubernetes.io/projected/0327c0c2-dd19-48b6-946d-3b1460a7a260-kube-api-access-dwvng\") pod \"dnsmasq-dns-688b9f5b49-ncs8x\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " 
pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.507619 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57nz6\" (UniqueName: \"kubernetes.io/projected/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-kube-api-access-57nz6\") pod \"heat-cfnapi-7b6fc899f8-tjzhx\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.568707 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-combined-ca-bundle\") pod \"heat-api-54cc46598c-928hf\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.568986 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-config-data\") pod \"heat-api-54cc46598c-928hf\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.569075 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpx92\" (UniqueName: \"kubernetes.io/projected/72a33116-8ea9-489b-819c-e69372202e03-kube-api-access-hpx92\") pod \"heat-api-54cc46598c-928hf\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.569232 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-config-data-custom\") pod \"heat-api-54cc46598c-928hf\" (UID: 
\"72a33116-8ea9-489b-819c-e69372202e03\") " pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.671629 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpx92\" (UniqueName: \"kubernetes.io/projected/72a33116-8ea9-489b-819c-e69372202e03-kube-api-access-hpx92\") pod \"heat-api-54cc46598c-928hf\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.672171 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-config-data-custom\") pod \"heat-api-54cc46598c-928hf\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.672397 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-combined-ca-bundle\") pod \"heat-api-54cc46598c-928hf\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.672529 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-config-data\") pod \"heat-api-54cc46598c-928hf\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.677325 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-config-data\") pod \"heat-api-54cc46598c-928hf\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " pod="openstack/heat-api-54cc46598c-928hf" 
Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.679326 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-combined-ca-bundle\") pod \"heat-api-54cc46598c-928hf\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.697516 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpx92\" (UniqueName: \"kubernetes.io/projected/72a33116-8ea9-489b-819c-e69372202e03-kube-api-access-hpx92\") pod \"heat-api-54cc46598c-928hf\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.698440 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-config-data-custom\") pod \"heat-api-54cc46598c-928hf\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.730228 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.741443 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:04 crc kubenswrapper[4915]: I1124 21:43:04.752636 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:05 crc kubenswrapper[4915]: I1124 21:43:05.345331 4915 generic.go:334] "Generic (PLEG): container finished" podID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerID="02a4b55630a569a5e17e835d5af62aa578681460f95278eb7ed660458ded7995" exitCode=0 Nov 24 21:43:05 crc kubenswrapper[4915]: I1124 21:43:05.345375 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8cddeb8d-6499-40ce-866b-6009ade95f6c","Type":"ContainerDied","Data":"02a4b55630a569a5e17e835d5af62aa578681460f95278eb7ed660458ded7995"} Nov 24 21:43:06 crc kubenswrapper[4915]: E1124 21:43:06.811661 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e5395dd_1fa1_461c_b0eb_edc5c817955c.slice/crio-conmon-a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:43:09 crc kubenswrapper[4915]: W1124 21:43:09.190967 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc50a58e9_839d_4c0a_9aea_e27e9b892db0.slice/crio-105991a0c83c8ce7015fe9ea988a8e3613ace80864bb2c23437bf80e7028a2e9 WatchSource:0}: Error finding container 105991a0c83c8ce7015fe9ea988a8e3613ace80864bb2c23437bf80e7028a2e9: Status 404 returned error can't find the container with id 105991a0c83c8ce7015fe9ea988a8e3613ace80864bb2c23437bf80e7028a2e9 Nov 24 21:43:09 crc kubenswrapper[4915]: I1124 21:43:09.417073 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xkncx" event={"ID":"c50a58e9-839d-4c0a-9aea-e27e9b892db0","Type":"ContainerStarted","Data":"105991a0c83c8ce7015fe9ea988a8e3613ace80864bb2c23437bf80e7028a2e9"} Nov 24 21:43:09 crc kubenswrapper[4915]: I1124 21:43:09.928453 4915 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.016949 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-combined-ca-bundle\") pod \"8cddeb8d-6499-40ce-866b-6009ade95f6c\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.017074 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cddeb8d-6499-40ce-866b-6009ade95f6c-run-httpd\") pod \"8cddeb8d-6499-40ce-866b-6009ade95f6c\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.017111 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-scripts\") pod \"8cddeb8d-6499-40ce-866b-6009ade95f6c\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.017165 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-sg-core-conf-yaml\") pod \"8cddeb8d-6499-40ce-866b-6009ade95f6c\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.017207 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cddeb8d-6499-40ce-866b-6009ade95f6c-log-httpd\") pod \"8cddeb8d-6499-40ce-866b-6009ade95f6c\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.017343 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hpw9\" (UniqueName: 
\"kubernetes.io/projected/8cddeb8d-6499-40ce-866b-6009ade95f6c-kube-api-access-7hpw9\") pod \"8cddeb8d-6499-40ce-866b-6009ade95f6c\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.017411 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-config-data\") pod \"8cddeb8d-6499-40ce-866b-6009ade95f6c\" (UID: \"8cddeb8d-6499-40ce-866b-6009ade95f6c\") " Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.017620 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cddeb8d-6499-40ce-866b-6009ade95f6c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8cddeb8d-6499-40ce-866b-6009ade95f6c" (UID: "8cddeb8d-6499-40ce-866b-6009ade95f6c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.018291 4915 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cddeb8d-6499-40ce-866b-6009ade95f6c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.019448 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cddeb8d-6499-40ce-866b-6009ade95f6c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8cddeb8d-6499-40ce-866b-6009ade95f6c" (UID: "8cddeb8d-6499-40ce-866b-6009ade95f6c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.030796 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-scripts" (OuterVolumeSpecName: "scripts") pod "8cddeb8d-6499-40ce-866b-6009ade95f6c" (UID: "8cddeb8d-6499-40ce-866b-6009ade95f6c"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.034386 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cddeb8d-6499-40ce-866b-6009ade95f6c-kube-api-access-7hpw9" (OuterVolumeSpecName: "kube-api-access-7hpw9") pod "8cddeb8d-6499-40ce-866b-6009ade95f6c" (UID: "8cddeb8d-6499-40ce-866b-6009ade95f6c"). InnerVolumeSpecName "kube-api-access-7hpw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.124768 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.124799 4915 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8cddeb8d-6499-40ce-866b-6009ade95f6c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.124809 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hpw9\" (UniqueName: \"kubernetes.io/projected/8cddeb8d-6499-40ce-866b-6009ade95f6c-kube-api-access-7hpw9\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.139347 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8cddeb8d-6499-40ce-866b-6009ade95f6c" (UID: "8cddeb8d-6499-40ce-866b-6009ade95f6c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.197944 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cddeb8d-6499-40ce-866b-6009ade95f6c" (UID: "8cddeb8d-6499-40ce-866b-6009ade95f6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.227680 4915 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.227716 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.292834 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-05c3-account-create-87xhn"] Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.308914 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-541f-account-create-6xjb9"] Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.355964 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-config-data" (OuterVolumeSpecName: "config-data") pod "8cddeb8d-6499-40ce-866b-6009ade95f6c" (UID: "8cddeb8d-6499-40ce-866b-6009ade95f6c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.432427 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cddeb8d-6499-40ce-866b-6009ade95f6c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.446112 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-05c3-account-create-87xhn" event={"ID":"f1089ced-cc42-419b-a130-e71479e51b16","Type":"ContainerStarted","Data":"a744fc8cf8b7554154c063565e70bea946e0290c40c28645d2a9bd843afee68f"} Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.446169 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8274e484-f997-4150-a496-f8f1a68ccfae","Type":"ContainerStarted","Data":"c95792a831a9330e99fdcac9021a52e2f45e0ff384f214c739717bc74a506187"} Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.456281 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-541f-account-create-6xjb9" event={"ID":"5ea7f227-3e0f-4e4f-9136-22a74b0f3057","Type":"ContainerStarted","Data":"3df1642d1188bd06b65e56358f4cae733b5c1286c0b249ea901da849a9458583"} Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.476688 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.366241491 podStartE2EDuration="15.476672296s" podCreationTimestamp="2025-11-24 21:42:55 +0000 UTC" firstStartedPulling="2025-11-24 21:42:56.360462233 +0000 UTC m=+1394.676714426" lastFinishedPulling="2025-11-24 21:43:09.470893058 +0000 UTC m=+1407.787145231" observedRunningTime="2025-11-24 21:43:10.475810072 +0000 UTC m=+1408.792062255" watchObservedRunningTime="2025-11-24 21:43:10.476672296 +0000 UTC m=+1408.792924469" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.488036 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"8cddeb8d-6499-40ce-866b-6009ade95f6c","Type":"ContainerDied","Data":"4d729a7333656b811ad4eb39a6bbfe1f469f09151172abe19d052f424403fa8e"} Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.488168 4915 scope.go:117] "RemoveContainer" containerID="de28da0b8d5d47e29d3a7c9417ee3bf491088f5807102c18164da015e0d3d1fb" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.488375 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.496863 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xkncx" event={"ID":"c50a58e9-839d-4c0a-9aea-e27e9b892db0","Type":"ContainerStarted","Data":"37e689d3a38e76cdd258398372e5c5923d5c12e3a83ba02761d37b5ffabc9724"} Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.516591 4915 scope.go:117] "RemoveContainer" containerID="4d3eb1db0c501024a55b79378896576216e89f5ceb45900cdad7aa5e034a9e8a" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.526647 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-xkncx" podStartSLOduration=10.526628592 podStartE2EDuration="10.526628592s" podCreationTimestamp="2025-11-24 21:43:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:43:10.523818397 +0000 UTC m=+1408.840070570" watchObservedRunningTime="2025-11-24 21:43:10.526628592 +0000 UTC m=+1408.842880765" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.569193 4915 scope.go:117] "RemoveContainer" containerID="02a4b55630a569a5e17e835d5af62aa578681460f95278eb7ed660458ded7995" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.581761 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.598932 4915 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.616810 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:43:10 crc kubenswrapper[4915]: E1124 21:43:10.617466 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="ceilometer-central-agent" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.617481 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="ceilometer-central-agent" Nov 24 21:43:10 crc kubenswrapper[4915]: E1124 21:43:10.617489 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="proxy-httpd" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.617494 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="proxy-httpd" Nov 24 21:43:10 crc kubenswrapper[4915]: E1124 21:43:10.617539 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="sg-core" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.617546 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="sg-core" Nov 24 21:43:10 crc kubenswrapper[4915]: E1124 21:43:10.617553 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="ceilometer-notification-agent" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.617560 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="ceilometer-notification-agent" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.617747 4915 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="sg-core" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.617761 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="ceilometer-central-agent" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.617796 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="proxy-httpd" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.617802 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" containerName="ceilometer-notification-agent" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.625229 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.627067 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.627075 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.630516 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.653509 4915 scope.go:117] "RemoveContainer" containerID="af69cdeddcb763ac67d6fb42617ab2bf479f34a1236215f298acc13cf506a0fd" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.759595 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-scripts\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.759661 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-config-data\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.759959 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.760082 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.760194 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-log-httpd\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.760226 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-run-httpd\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.760288 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzh4t\" (UniqueName: 
\"kubernetes.io/projected/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-kube-api-access-nzh4t\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.863610 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-config-data\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.863794 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.863894 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.863993 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-log-httpd\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.864020 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-run-httpd\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 
21:43:10.864057 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzh4t\" (UniqueName: \"kubernetes.io/projected/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-kube-api-access-nzh4t\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.864144 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-scripts\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.868125 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-log-httpd\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.868203 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-run-httpd\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.869298 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-scripts\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.870911 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " 
pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.881134 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-config-data\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.884619 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.888890 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzh4t\" (UniqueName: \"kubernetes.io/projected/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-kube-api-access-nzh4t\") pod \"ceilometer-0\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " pod="openstack/ceilometer-0" Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.951621 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-54cc46598c-928hf"] Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.980823 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-2688t"] Nov 24 21:43:10 crc kubenswrapper[4915]: I1124 21:43:10.987374 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.002562 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7b6fc899f8-tjzhx"] Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.027110 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ncs8x"] Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.106800 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-678kk"] Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.134632 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-968b48459-dfqg8"] Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.210516 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-1fec-account-create-c2mcb"] Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.323140 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8f669b545-6bqfj"] Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.582205 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-968b48459-dfqg8" event={"ID":"cdff3fcc-adf5-4e04-9cea-b12c43b4f025","Type":"ContainerStarted","Data":"706e36cf0ff63fcb57d625ec3dcb9df6edc0f4702736f58eafac44c52d9278b2"} Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.599584 4915 generic.go:334] "Generic (PLEG): container finished" podID="5ea7f227-3e0f-4e4f-9136-22a74b0f3057" containerID="7ee3b3c6520d78d154dabb0b74611526ce9d58814d358c68ad6b5f8de5f292d6" exitCode=0 Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.599655 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-541f-account-create-6xjb9" event={"ID":"5ea7f227-3e0f-4e4f-9136-22a74b0f3057","Type":"ContainerDied","Data":"7ee3b3c6520d78d154dabb0b74611526ce9d58814d358c68ad6b5f8de5f292d6"} Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.639814 4915 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1fec-account-create-c2mcb" event={"ID":"4387d1ab-ac63-465a-a164-1fb5f78352da","Type":"ContainerStarted","Data":"862770451bdb9aca38edc08c277201b374ae35a6bf10cbf3d43dbac3a308b97a"} Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.674643 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-678kk" event={"ID":"789abde1-5e42-4032-8b87-e3c475c709e8","Type":"ContainerStarted","Data":"ef07e57689b9203f15deb02cf8580b576220c881bf5c46611fbefbea60f08c5c"} Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.687616 4915 generic.go:334] "Generic (PLEG): container finished" podID="c50a58e9-839d-4c0a-9aea-e27e9b892db0" containerID="37e689d3a38e76cdd258398372e5c5923d5c12e3a83ba02761d37b5ffabc9724" exitCode=0 Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.687674 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xkncx" event={"ID":"c50a58e9-839d-4c0a-9aea-e27e9b892db0","Type":"ContainerDied","Data":"37e689d3a38e76cdd258398372e5c5923d5c12e3a83ba02761d37b5ffabc9724"} Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.690591 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8f669b545-6bqfj" event={"ID":"a652c587-ebd5-4783-8397-9275b5a3b682","Type":"ContainerStarted","Data":"9071d048339ffb1d7b8fb8fcc71ff3c2970d8956a73b7f831a09eb0de54314e4"} Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.693226 4915 generic.go:334] "Generic (PLEG): container finished" podID="f1089ced-cc42-419b-a130-e71479e51b16" containerID="91fad0a063c9f8f10c3f275798bfc0c0c2923894ca886bc2b316a82457e87abe" exitCode=0 Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.693281 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-05c3-account-create-87xhn" 
event={"ID":"f1089ced-cc42-419b-a130-e71479e51b16","Type":"ContainerDied","Data":"91fad0a063c9f8f10c3f275798bfc0c0c2923894ca886bc2b316a82457e87abe"} Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.696765 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" event={"ID":"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f","Type":"ContainerStarted","Data":"fab474f06a11bd8ebf7af56b10b625f09a4227ae86f8acf3f5cb7a813445f396"} Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.701539 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" event={"ID":"0327c0c2-dd19-48b6-946d-3b1460a7a260","Type":"ContainerStarted","Data":"70029c7897c4504f32eecdac7817e1f47056764739158d6bef3583e7e50326f8"} Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.743125 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-54cc46598c-928hf" event={"ID":"72a33116-8ea9-489b-819c-e69372202e03","Type":"ContainerStarted","Data":"c415c7509f481afaf281f225c9382ec832b946f338f621cc6b7b75cc4e8e905e"} Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.750109 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2688t" event={"ID":"0ca21771-e57e-49fc-a4c0-a8fea17fd2ea","Type":"ContainerStarted","Data":"a8aad2628319ddb95af7553c944d2bc090aba2ee9b8daa9abf00cece68f80e5a"} Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.927357 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-c6db6bd6d-wbhkr"] Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.929224 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.975457 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-c6db6bd6d-wbhkr"] Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.987316 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-788f8c6c87-bw87v"] Nov 24 21:43:11 crc kubenswrapper[4915]: I1124 21:43:11.990533 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.004802 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-788f8c6c87-bw87v"] Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.025215 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-64fdcf7c87-z5n75"] Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.026787 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.032118 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjxqq\" (UniqueName: \"kubernetes.io/projected/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-kube-api-access-gjxqq\") pod \"heat-engine-c6db6bd6d-wbhkr\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.032258 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-config-data-custom\") pod \"heat-engine-c6db6bd6d-wbhkr\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.032290 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-combined-ca-bundle\") pod \"heat-engine-c6db6bd6d-wbhkr\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.032342 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-config-data\") pod \"heat-engine-c6db6bd6d-wbhkr\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.098967 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-64fdcf7c87-z5n75"] Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.116727 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.135307 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-config-data-custom\") pod \"heat-api-64fdcf7c87-z5n75\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.135630 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn2n7\" (UniqueName: \"kubernetes.io/projected/afef51f6-3303-49ea-945d-a5e98d263877-kube-api-access-hn2n7\") pod \"heat-cfnapi-788f8c6c87-bw87v\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.135767 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-config-data-custom\") pod \"heat-cfnapi-788f8c6c87-bw87v\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.135924 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-config-data\") pod \"heat-api-64fdcf7c87-z5n75\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.136026 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz6p8\" (UniqueName: \"kubernetes.io/projected/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-kube-api-access-xz6p8\") pod \"heat-api-64fdcf7c87-z5n75\" (UID: 
\"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.136235 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-combined-ca-bundle\") pod \"heat-cfnapi-788f8c6c87-bw87v\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.136360 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjxqq\" (UniqueName: \"kubernetes.io/projected/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-kube-api-access-gjxqq\") pod \"heat-engine-c6db6bd6d-wbhkr\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.136497 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-config-data-custom\") pod \"heat-engine-c6db6bd6d-wbhkr\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.136578 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-combined-ca-bundle\") pod \"heat-engine-c6db6bd6d-wbhkr\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.136669 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-config-data\") pod \"heat-cfnapi-788f8c6c87-bw87v\" (UID: 
\"afef51f6-3303-49ea-945d-a5e98d263877\") " pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.136806 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-config-data\") pod \"heat-engine-c6db6bd6d-wbhkr\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.137258 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-combined-ca-bundle\") pod \"heat-api-64fdcf7c87-z5n75\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.142249 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-config-data-custom\") pod \"heat-engine-c6db6bd6d-wbhkr\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.143463 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-combined-ca-bundle\") pod \"heat-engine-c6db6bd6d-wbhkr\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.145949 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-config-data\") pod \"heat-engine-c6db6bd6d-wbhkr\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " 
pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.163508 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjxqq\" (UniqueName: \"kubernetes.io/projected/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-kube-api-access-gjxqq\") pod \"heat-engine-c6db6bd6d-wbhkr\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:12 crc kubenswrapper[4915]: E1124 21:43:12.175098 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e5395dd_1fa1_461c_b0eb_edc5c817955c.slice/crio-conmon-a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.239473 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-combined-ca-bundle\") pod \"heat-api-64fdcf7c87-z5n75\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.239687 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-config-data-custom\") pod \"heat-api-64fdcf7c87-z5n75\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.239804 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn2n7\" (UniqueName: \"kubernetes.io/projected/afef51f6-3303-49ea-945d-a5e98d263877-kube-api-access-hn2n7\") pod \"heat-cfnapi-788f8c6c87-bw87v\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " 
pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.239836 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-config-data-custom\") pod \"heat-cfnapi-788f8c6c87-bw87v\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.239868 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-config-data\") pod \"heat-api-64fdcf7c87-z5n75\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.239902 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz6p8\" (UniqueName: \"kubernetes.io/projected/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-kube-api-access-xz6p8\") pod \"heat-api-64fdcf7c87-z5n75\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.239962 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-combined-ca-bundle\") pod \"heat-cfnapi-788f8c6c87-bw87v\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.240044 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-config-data\") pod \"heat-cfnapi-788f8c6c87-bw87v\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:12 
crc kubenswrapper[4915]: I1124 21:43:12.249928 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-config-data-custom\") pod \"heat-api-64fdcf7c87-z5n75\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.250744 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-config-data-custom\") pod \"heat-cfnapi-788f8c6c87-bw87v\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.253340 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-config-data\") pod \"heat-cfnapi-788f8c6c87-bw87v\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.254064 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-combined-ca-bundle\") pod \"heat-cfnapi-788f8c6c87-bw87v\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.255831 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-config-data\") pod \"heat-api-64fdcf7c87-z5n75\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.262628 4915 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-hn2n7\" (UniqueName: \"kubernetes.io/projected/afef51f6-3303-49ea-945d-a5e98d263877-kube-api-access-hn2n7\") pod \"heat-cfnapi-788f8c6c87-bw87v\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.264202 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz6p8\" (UniqueName: \"kubernetes.io/projected/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-kube-api-access-xz6p8\") pod \"heat-api-64fdcf7c87-z5n75\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.264394 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-combined-ca-bundle\") pod \"heat-api-64fdcf7c87-z5n75\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.293985 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.319578 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.418893 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.477611 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cddeb8d-6499-40ce-866b-6009ade95f6c" path="/var/lib/kubelet/pods/8cddeb8d-6499-40ce-866b-6009ade95f6c/volumes" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.775154 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8f669b545-6bqfj" event={"ID":"a652c587-ebd5-4783-8397-9275b5a3b682","Type":"ContainerStarted","Data":"758e36b26cc0fb5693068dcda64a28fd6b96b4d2a353c93a74e9837e318e396b"} Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.776758 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.778549 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-968b48459-dfqg8" event={"ID":"cdff3fcc-adf5-4e04-9cea-b12c43b4f025","Type":"ContainerStarted","Data":"b98ee5ac19f3c03a04bf83a9efe4dd4974e2826109bd5110796bec3e0179db32"} Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.780169 4915 generic.go:334] "Generic (PLEG): container finished" podID="789abde1-5e42-4032-8b87-e3c475c709e8" containerID="1a4613153211c71d0c2870ad58288dd61754bc7b04500dc2b96683469fb627d4" exitCode=0 Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.780266 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-678kk" event={"ID":"789abde1-5e42-4032-8b87-e3c475c709e8","Type":"ContainerDied","Data":"1a4613153211c71d0c2870ad58288dd61754bc7b04500dc2b96683469fb627d4"} Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.782574 4915 generic.go:334] "Generic (PLEG): container finished" podID="0ca21771-e57e-49fc-a4c0-a8fea17fd2ea" containerID="8b8cbb70659d80e191530e7c40b2b924d401ef7433f8a2552d670ab31e15f985" exitCode=0 Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 
21:43:12.782620 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2688t" event={"ID":"0ca21771-e57e-49fc-a4c0-a8fea17fd2ea","Type":"ContainerDied","Data":"8b8cbb70659d80e191530e7c40b2b924d401ef7433f8a2552d670ab31e15f985"} Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.785205 4915 generic.go:334] "Generic (PLEG): container finished" podID="4387d1ab-ac63-465a-a164-1fb5f78352da" containerID="af399af7155b788b197e6fe35e604ced994e0fcf1cbe057c53aa5c94798a105c" exitCode=0 Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.785268 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1fec-account-create-c2mcb" event={"ID":"4387d1ab-ac63-465a-a164-1fb5f78352da","Type":"ContainerDied","Data":"af399af7155b788b197e6fe35e604ced994e0fcf1cbe057c53aa5c94798a105c"} Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.793650 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb","Type":"ContainerStarted","Data":"3840af77be930601a442407ddc24873ca9dc6938e81e48577528d6e4047340e7"} Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.800217 4915 generic.go:334] "Generic (PLEG): container finished" podID="0327c0c2-dd19-48b6-946d-3b1460a7a260" containerID="45897b5946030f47e1af32af054c49a1f36709d46d1ca32aba0807ce65de1e7b" exitCode=0 Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.801658 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" event={"ID":"0327c0c2-dd19-48b6-946d-3b1460a7a260","Type":"ContainerDied","Data":"45897b5946030f47e1af32af054c49a1f36709d46d1ca32aba0807ce65de1e7b"} Nov 24 21:43:12 crc kubenswrapper[4915]: I1124 21:43:12.816511 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-8f669b545-6bqfj" podStartSLOduration=8.816494424 podStartE2EDuration="8.816494424s" podCreationTimestamp="2025-11-24 21:43:04 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:43:12.791963283 +0000 UTC m=+1411.108215456" watchObservedRunningTime="2025-11-24 21:43:12.816494424 +0000 UTC m=+1411.132746597" Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.043290 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-c6db6bd6d-wbhkr"] Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.061004 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-788f8c6c87-bw87v"] Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.667386 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-64fdcf7c87-z5n75"] Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.839487 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" event={"ID":"0327c0c2-dd19-48b6-946d-3b1460a7a260","Type":"ContainerStarted","Data":"2edd553f1ef5fbadcc771a627b44cc68540fef7b1f99306b6b987931cf80c7b5"} Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.841277 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.851086 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-968b48459-dfqg8" event={"ID":"cdff3fcc-adf5-4e04-9cea-b12c43b4f025","Type":"ContainerStarted","Data":"426d6b4689d40d6d384864fab9f485204fc82de3ece652e54a55dbe40e0b2f09"} Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.851272 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.851338 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.854426 4915 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb","Type":"ContainerStarted","Data":"9d8fcc49d4845bde8e7a10366404f976096c59219377177fe6698b3e29d863d8"} Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.860639 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-c6db6bd6d-wbhkr" event={"ID":"4f28548e-b6b5-4a05-8493-a3c1896e4c6d","Type":"ContainerStarted","Data":"9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9"} Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.860684 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-c6db6bd6d-wbhkr" event={"ID":"4f28548e-b6b5-4a05-8493-a3c1896e4c6d","Type":"ContainerStarted","Data":"2582d57fa65ddabf86b88fdb705f4901acdf324e90302869471dadb9f4b6ac88"} Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.860982 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.861858 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" podStartSLOduration=9.86183814 podStartE2EDuration="9.86183814s" podCreationTimestamp="2025-11-24 21:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:43:13.860221296 +0000 UTC m=+1412.176473469" watchObservedRunningTime="2025-11-24 21:43:13.86183814 +0000 UTC m=+1412.178090313" Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.870684 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" event={"ID":"afef51f6-3303-49ea-945d-a5e98d263877","Type":"ContainerStarted","Data":"130b4fd63d2e2f7878f3a96cafd82d0e9c248eee7c5d3efb6f8ce13e37282789"} Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.907422 4915 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-968b48459-dfqg8" podStartSLOduration=10.907400258 podStartE2EDuration="10.907400258s" podCreationTimestamp="2025-11-24 21:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:43:13.896683869 +0000 UTC m=+1412.212936062" watchObservedRunningTime="2025-11-24 21:43:13.907400258 +0000 UTC m=+1412.223652431" Nov 24 21:43:13 crc kubenswrapper[4915]: I1124 21:43:13.920319 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-c6db6bd6d-wbhkr" podStartSLOduration=2.9202959059999998 podStartE2EDuration="2.920295906s" podCreationTimestamp="2025-11-24 21:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:43:13.909816403 +0000 UTC m=+1412.226068576" watchObservedRunningTime="2025-11-24 21:43:13.920295906 +0000 UTC m=+1412.236548089" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.389817 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-54cc46598c-928hf"] Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.483829 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6d86bfd46c-5v69x"] Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.490898 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b6fc899f8-tjzhx"] Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.491071 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.506670 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.518127 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.519087 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d86bfd46c-5v69x"] Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.528984 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6f8894bc87-v9t9l"] Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.533677 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.541312 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.541563 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.567542 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f8894bc87-v9t9l"] Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.657246 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdplf\" (UniqueName: \"kubernetes.io/projected/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-kube-api-access-vdplf\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.657315 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-combined-ca-bundle\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.657357 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-combined-ca-bundle\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.657406 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-config-data\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.657431 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-public-tls-certs\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.657501 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-internal-tls-certs\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.657529 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-internal-tls-certs\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.657562 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-config-data-custom\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.657613 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l47zh\" (UniqueName: \"kubernetes.io/projected/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-kube-api-access-l47zh\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.657656 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-config-data-custom\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.657716 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-public-tls-certs\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc 
kubenswrapper[4915]: I1124 21:43:14.657771 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-config-data\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.763942 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdplf\" (UniqueName: \"kubernetes.io/projected/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-kube-api-access-vdplf\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.764001 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-combined-ca-bundle\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.764027 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-combined-ca-bundle\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.764060 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-config-data\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 
21:43:14.764079 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-public-tls-certs\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.764121 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-internal-tls-certs\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.764141 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-internal-tls-certs\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.764164 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-config-data-custom\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.764194 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l47zh\" (UniqueName: \"kubernetes.io/projected/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-kube-api-access-l47zh\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.764221 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-config-data-custom\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.764256 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-public-tls-certs\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.764288 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-config-data\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.770061 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-internal-tls-certs\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.776672 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-internal-tls-certs\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.779699 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-combined-ca-bundle\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.779760 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-config-data-custom\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.780804 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-public-tls-certs\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.781964 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.786617 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-config-data\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.787950 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-public-tls-certs\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.792230 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-config-data-custom\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.804001 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-combined-ca-bundle\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.806468 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l47zh\" (UniqueName: \"kubernetes.io/projected/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-kube-api-access-l47zh\") pod \"heat-api-6d86bfd46c-5v69x\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.807763 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdplf\" (UniqueName: \"kubernetes.io/projected/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-kube-api-access-vdplf\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.813983 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-config-data\") pod \"heat-cfnapi-6f8894bc87-v9t9l\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:14 crc kubenswrapper[4915]: I1124 21:43:14.937298 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.060493 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.125169 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xkncx" Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.178861 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncc8w\" (UniqueName: \"kubernetes.io/projected/c50a58e9-839d-4c0a-9aea-e27e9b892db0-kube-api-access-ncc8w\") pod \"c50a58e9-839d-4c0a-9aea-e27e9b892db0\" (UID: \"c50a58e9-839d-4c0a-9aea-e27e9b892db0\") " Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.179047 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50a58e9-839d-4c0a-9aea-e27e9b892db0-operator-scripts\") pod \"c50a58e9-839d-4c0a-9aea-e27e9b892db0\" (UID: \"c50a58e9-839d-4c0a-9aea-e27e9b892db0\") " Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.179421 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c50a58e9-839d-4c0a-9aea-e27e9b892db0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c50a58e9-839d-4c0a-9aea-e27e9b892db0" (UID: "c50a58e9-839d-4c0a-9aea-e27e9b892db0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.179836 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c50a58e9-839d-4c0a-9aea-e27e9b892db0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.183838 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c50a58e9-839d-4c0a-9aea-e27e9b892db0-kube-api-access-ncc8w" (OuterVolumeSpecName: "kube-api-access-ncc8w") pod "c50a58e9-839d-4c0a-9aea-e27e9b892db0" (UID: "c50a58e9-839d-4c0a-9aea-e27e9b892db0"). InnerVolumeSpecName "kube-api-access-ncc8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.282474 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncc8w\" (UniqueName: \"kubernetes.io/projected/c50a58e9-839d-4c0a-9aea-e27e9b892db0-kube-api-access-ncc8w\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.898196 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-2688t" event={"ID":"0ca21771-e57e-49fc-a4c0-a8fea17fd2ea","Type":"ContainerDied","Data":"a8aad2628319ddb95af7553c944d2bc090aba2ee9b8daa9abf00cece68f80e5a"} Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.898438 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8aad2628319ddb95af7553c944d2bc090aba2ee9b8daa9abf00cece68f80e5a" Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.902130 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-541f-account-create-6xjb9" event={"ID":"5ea7f227-3e0f-4e4f-9136-22a74b0f3057","Type":"ContainerDied","Data":"3df1642d1188bd06b65e56358f4cae733b5c1286c0b249ea901da849a9458583"} Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.902165 4915 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3df1642d1188bd06b65e56358f4cae733b5c1286c0b249ea901da849a9458583" Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.906121 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-1fec-account-create-c2mcb" event={"ID":"4387d1ab-ac63-465a-a164-1fb5f78352da","Type":"ContainerDied","Data":"862770451bdb9aca38edc08c277201b374ae35a6bf10cbf3d43dbac3a308b97a"} Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.906155 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="862770451bdb9aca38edc08c277201b374ae35a6bf10cbf3d43dbac3a308b97a" Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.907742 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64fdcf7c87-z5n75" event={"ID":"60efd5e8-d968-4c42-bca3-5890cb8b3ce9","Type":"ContainerStarted","Data":"5607af2a44bb18d7f0b7131908b4ce41f0035482da47ea84a3ae8aa5728c062e"} Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.916898 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-678kk" event={"ID":"789abde1-5e42-4032-8b87-e3c475c709e8","Type":"ContainerDied","Data":"ef07e57689b9203f15deb02cf8580b576220c881bf5c46611fbefbea60f08c5c"} Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.916934 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef07e57689b9203f15deb02cf8580b576220c881bf5c46611fbefbea60f08c5c" Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.919253 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xkncx" event={"ID":"c50a58e9-839d-4c0a-9aea-e27e9b892db0","Type":"ContainerDied","Data":"105991a0c83c8ce7015fe9ea988a8e3613ace80864bb2c23437bf80e7028a2e9"} Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.919332 4915 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="105991a0c83c8ce7015fe9ea988a8e3613ace80864bb2c23437bf80e7028a2e9" Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.919280 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xkncx" Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.921268 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-05c3-account-create-87xhn" event={"ID":"f1089ced-cc42-419b-a130-e71479e51b16","Type":"ContainerDied","Data":"a744fc8cf8b7554154c063565e70bea946e0290c40c28645d2a9bd843afee68f"} Nov 24 21:43:15 crc kubenswrapper[4915]: I1124 21:43:15.921303 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a744fc8cf8b7554154c063565e70bea946e0290c40c28645d2a9bd843afee68f" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.167925 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-1fec-account-create-c2mcb" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.193402 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-05c3-account-create-87xhn" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.195090 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-678kk" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.201477 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4btrz\" (UniqueName: \"kubernetes.io/projected/789abde1-5e42-4032-8b87-e3c475c709e8-kube-api-access-4btrz\") pod \"789abde1-5e42-4032-8b87-e3c475c709e8\" (UID: \"789abde1-5e42-4032-8b87-e3c475c709e8\") " Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.201524 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr4mh\" (UniqueName: \"kubernetes.io/projected/f1089ced-cc42-419b-a130-e71479e51b16-kube-api-access-pr4mh\") pod \"f1089ced-cc42-419b-a130-e71479e51b16\" (UID: \"f1089ced-cc42-419b-a130-e71479e51b16\") " Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.201622 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/789abde1-5e42-4032-8b87-e3c475c709e8-operator-scripts\") pod \"789abde1-5e42-4032-8b87-e3c475c709e8\" (UID: \"789abde1-5e42-4032-8b87-e3c475c709e8\") " Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.201759 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8vbv\" (UniqueName: \"kubernetes.io/projected/4387d1ab-ac63-465a-a164-1fb5f78352da-kube-api-access-t8vbv\") pod \"4387d1ab-ac63-465a-a164-1fb5f78352da\" (UID: \"4387d1ab-ac63-465a-a164-1fb5f78352da\") " Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.201799 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1089ced-cc42-419b-a130-e71479e51b16-operator-scripts\") pod \"f1089ced-cc42-419b-a130-e71479e51b16\" (UID: \"f1089ced-cc42-419b-a130-e71479e51b16\") " Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.201860 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4387d1ab-ac63-465a-a164-1fb5f78352da-operator-scripts\") pod \"4387d1ab-ac63-465a-a164-1fb5f78352da\" (UID: \"4387d1ab-ac63-465a-a164-1fb5f78352da\") " Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.202622 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/789abde1-5e42-4032-8b87-e3c475c709e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "789abde1-5e42-4032-8b87-e3c475c709e8" (UID: "789abde1-5e42-4032-8b87-e3c475c709e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.202703 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4387d1ab-ac63-465a-a164-1fb5f78352da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4387d1ab-ac63-465a-a164-1fb5f78352da" (UID: "4387d1ab-ac63-465a-a164-1fb5f78352da"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.202986 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1089ced-cc42-419b-a130-e71479e51b16-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f1089ced-cc42-419b-a130-e71479e51b16" (UID: "f1089ced-cc42-419b-a130-e71479e51b16"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.217063 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4387d1ab-ac63-465a-a164-1fb5f78352da-kube-api-access-t8vbv" (OuterVolumeSpecName: "kube-api-access-t8vbv") pod "4387d1ab-ac63-465a-a164-1fb5f78352da" (UID: "4387d1ab-ac63-465a-a164-1fb5f78352da"). 
InnerVolumeSpecName "kube-api-access-t8vbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.217218 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/789abde1-5e42-4032-8b87-e3c475c709e8-kube-api-access-4btrz" (OuterVolumeSpecName: "kube-api-access-4btrz") pod "789abde1-5e42-4032-8b87-e3c475c709e8" (UID: "789abde1-5e42-4032-8b87-e3c475c709e8"). InnerVolumeSpecName "kube-api-access-4btrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.219310 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1089ced-cc42-419b-a130-e71479e51b16-kube-api-access-pr4mh" (OuterVolumeSpecName: "kube-api-access-pr4mh") pod "f1089ced-cc42-419b-a130-e71479e51b16" (UID: "f1089ced-cc42-419b-a130-e71479e51b16"). InnerVolumeSpecName "kube-api-access-pr4mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.259603 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-541f-account-create-6xjb9" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.296152 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-2688t" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.303839 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ea7f227-3e0f-4e4f-9136-22a74b0f3057-operator-scripts\") pod \"5ea7f227-3e0f-4e4f-9136-22a74b0f3057\" (UID: \"5ea7f227-3e0f-4e4f-9136-22a74b0f3057\") " Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.304164 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqlp2\" (UniqueName: \"kubernetes.io/projected/5ea7f227-3e0f-4e4f-9136-22a74b0f3057-kube-api-access-cqlp2\") pod \"5ea7f227-3e0f-4e4f-9136-22a74b0f3057\" (UID: \"5ea7f227-3e0f-4e4f-9136-22a74b0f3057\") " Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.304243 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ca21771-e57e-49fc-a4c0-a8fea17fd2ea-operator-scripts\") pod \"0ca21771-e57e-49fc-a4c0-a8fea17fd2ea\" (UID: \"0ca21771-e57e-49fc-a4c0-a8fea17fd2ea\") " Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.304280 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrs6b\" (UniqueName: \"kubernetes.io/projected/0ca21771-e57e-49fc-a4c0-a8fea17fd2ea-kube-api-access-nrs6b\") pod \"0ca21771-e57e-49fc-a4c0-a8fea17fd2ea\" (UID: \"0ca21771-e57e-49fc-a4c0-a8fea17fd2ea\") " Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.304889 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4btrz\" (UniqueName: \"kubernetes.io/projected/789abde1-5e42-4032-8b87-e3c475c709e8-kube-api-access-4btrz\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.304907 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr4mh\" (UniqueName: 
\"kubernetes.io/projected/f1089ced-cc42-419b-a130-e71479e51b16-kube-api-access-pr4mh\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.304920 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/789abde1-5e42-4032-8b87-e3c475c709e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.304929 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8vbv\" (UniqueName: \"kubernetes.io/projected/4387d1ab-ac63-465a-a164-1fb5f78352da-kube-api-access-t8vbv\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.304938 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f1089ced-cc42-419b-a130-e71479e51b16-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.304947 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4387d1ab-ac63-465a-a164-1fb5f78352da-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.306711 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ea7f227-3e0f-4e4f-9136-22a74b0f3057-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5ea7f227-3e0f-4e4f-9136-22a74b0f3057" (UID: "5ea7f227-3e0f-4e4f-9136-22a74b0f3057"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.307679 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ca21771-e57e-49fc-a4c0-a8fea17fd2ea-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ca21771-e57e-49fc-a4c0-a8fea17fd2ea" (UID: "0ca21771-e57e-49fc-a4c0-a8fea17fd2ea"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.327557 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca21771-e57e-49fc-a4c0-a8fea17fd2ea-kube-api-access-nrs6b" (OuterVolumeSpecName: "kube-api-access-nrs6b") pod "0ca21771-e57e-49fc-a4c0-a8fea17fd2ea" (UID: "0ca21771-e57e-49fc-a4c0-a8fea17fd2ea"). InnerVolumeSpecName "kube-api-access-nrs6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.329447 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ea7f227-3e0f-4e4f-9136-22a74b0f3057-kube-api-access-cqlp2" (OuterVolumeSpecName: "kube-api-access-cqlp2") pod "5ea7f227-3e0f-4e4f-9136-22a74b0f3057" (UID: "5ea7f227-3e0f-4e4f-9136-22a74b0f3057"). InnerVolumeSpecName "kube-api-access-cqlp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.407400 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqlp2\" (UniqueName: \"kubernetes.io/projected/5ea7f227-3e0f-4e4f-9136-22a74b0f3057-kube-api-access-cqlp2\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.407429 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ca21771-e57e-49fc-a4c0-a8fea17fd2ea-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.407439 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrs6b\" (UniqueName: \"kubernetes.io/projected/0ca21771-e57e-49fc-a4c0-a8fea17fd2ea-kube-api-access-nrs6b\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.407447 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5ea7f227-3e0f-4e4f-9136-22a74b0f3057-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.501464 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d86bfd46c-5v69x"] Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.595001 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f8894bc87-v9t9l"] Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.934007 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb","Type":"ContainerStarted","Data":"cd01ce869c7a1ae18efa30bd50d95827ef58a4ae136ea90c6193c577308cd5c2"} Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.936845 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d86bfd46c-5v69x" 
event={"ID":"cb4f24a4-d3ad-4bc2-bbed-47a35479c826","Type":"ContainerStarted","Data":"cb8fff6986184c87fa6d6717c605d46266420a1bcac82e9c44ce5e61858e5d9d"} Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.936925 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d86bfd46c-5v69x" event={"ID":"cb4f24a4-d3ad-4bc2-bbed-47a35479c826","Type":"ContainerStarted","Data":"01aaebdff58298c9ee0eed04fc04096dc67da46fed61adb4148b5669e7f38e8a"} Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.937417 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.938158 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" event={"ID":"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4","Type":"ContainerStarted","Data":"f5a8726cf154415ab859df4f0042e5eabffb3c20ce73bca4f2a8d310ca030b9c"} Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.941244 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" event={"ID":"afef51f6-3303-49ea-945d-a5e98d263877","Type":"ContainerStarted","Data":"ed030739eab59cd74ff208fb8ea9a0c9aebebeaa83b0cacf9b63cb312048c293"} Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.941390 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.943334 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64fdcf7c87-z5n75" event={"ID":"60efd5e8-d968-4c42-bca3-5890cb8b3ce9","Type":"ContainerStarted","Data":"a6834d808011d59c43aa629a9039f7e6cf3f388d7f861ded3215aa997f625743"} Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.943477 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.951357 4915 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-54cc46598c-928hf" event={"ID":"72a33116-8ea9-489b-819c-e69372202e03","Type":"ContainerStarted","Data":"e8c459a545949fde680ec3decd68590600bb768bc0ce898b901edef34d1ac4a0"} Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.951528 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-54cc46598c-928hf" podUID="72a33116-8ea9-489b-819c-e69372202e03" containerName="heat-api" containerID="cri-o://e8c459a545949fde680ec3decd68590600bb768bc0ce898b901edef34d1ac4a0" gracePeriod=60 Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.951824 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.979962 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-541f-account-create-6xjb9" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.981444 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" podUID="cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f" containerName="heat-cfnapi" containerID="cri-o://b28b879f3b20b16dbe0627e2fe2354877e944cd953073868451debaf3478b982" gracePeriod=60 Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.981550 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-05c3-account-create-87xhn" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.981633 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-2688t" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.981750 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-1fec-account-create-c2mcb" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.981788 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" event={"ID":"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f","Type":"ContainerStarted","Data":"b28b879f3b20b16dbe0627e2fe2354877e944cd953073868451debaf3478b982"} Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.981849 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-678kk" Nov 24 21:43:16 crc kubenswrapper[4915]: I1124 21:43:16.983615 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:17 crc kubenswrapper[4915]: I1124 21:43:17.024998 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6d86bfd46c-5v69x" podStartSLOduration=3.024969606 podStartE2EDuration="3.024969606s" podCreationTimestamp="2025-11-24 21:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:43:16.963443777 +0000 UTC m=+1415.279695970" watchObservedRunningTime="2025-11-24 21:43:17.024969606 +0000 UTC m=+1415.341221779" Nov 24 21:43:17 crc kubenswrapper[4915]: I1124 21:43:17.056861 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" podStartSLOduration=3.094953665 podStartE2EDuration="6.056834676s" podCreationTimestamp="2025-11-24 21:43:11 +0000 UTC" firstStartedPulling="2025-11-24 21:43:13.075169419 +0000 UTC m=+1411.391421592" lastFinishedPulling="2025-11-24 21:43:16.03705043 +0000 UTC m=+1414.353302603" observedRunningTime="2025-11-24 21:43:16.982768909 +0000 UTC m=+1415.299021082" watchObservedRunningTime="2025-11-24 21:43:17.056834676 +0000 UTC m=+1415.373086859" Nov 24 21:43:17 crc kubenswrapper[4915]: I1124 
21:43:17.065327 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-64fdcf7c87-z5n75" podStartSLOduration=5.613154374 podStartE2EDuration="6.065306645s" podCreationTimestamp="2025-11-24 21:43:11 +0000 UTC" firstStartedPulling="2025-11-24 21:43:15.87721682 +0000 UTC m=+1414.193468993" lastFinishedPulling="2025-11-24 21:43:16.329369091 +0000 UTC m=+1414.645621264" observedRunningTime="2025-11-24 21:43:17.007866165 +0000 UTC m=+1415.324118338" watchObservedRunningTime="2025-11-24 21:43:17.065306645 +0000 UTC m=+1415.381558818" Nov 24 21:43:17 crc kubenswrapper[4915]: I1124 21:43:17.117536 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-54cc46598c-928hf" podStartSLOduration=8.148673429 podStartE2EDuration="13.117518152s" podCreationTimestamp="2025-11-24 21:43:04 +0000 UTC" firstStartedPulling="2025-11-24 21:43:10.995327181 +0000 UTC m=+1409.311579364" lastFinishedPulling="2025-11-24 21:43:15.964171914 +0000 UTC m=+1414.280424087" observedRunningTime="2025-11-24 21:43:17.033511277 +0000 UTC m=+1415.349763450" watchObservedRunningTime="2025-11-24 21:43:17.117518152 +0000 UTC m=+1415.433770325" Nov 24 21:43:17 crc kubenswrapper[4915]: I1124 21:43:17.130453 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" podStartSLOduration=8.179379306 podStartE2EDuration="13.13043435s" podCreationTimestamp="2025-11-24 21:43:04 +0000 UTC" firstStartedPulling="2025-11-24 21:43:11.028977608 +0000 UTC m=+1409.345229781" lastFinishedPulling="2025-11-24 21:43:15.980032642 +0000 UTC m=+1414.296284825" observedRunningTime="2025-11-24 21:43:17.061296386 +0000 UTC m=+1415.377548559" watchObservedRunningTime="2025-11-24 21:43:17.13043435 +0000 UTC m=+1415.446686523" Nov 24 21:43:18 crc kubenswrapper[4915]: I1124 21:43:18.004184 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" 
event={"ID":"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4","Type":"ContainerStarted","Data":"9191af4d3005f3bdc44461f95a1bef9076d7444b9c91ac9a323f36e9afcd0eaa"} Nov 24 21:43:18 crc kubenswrapper[4915]: I1124 21:43:18.004885 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:18 crc kubenswrapper[4915]: I1124 21:43:18.025232 4915 generic.go:334] "Generic (PLEG): container finished" podID="afef51f6-3303-49ea-945d-a5e98d263877" containerID="ed030739eab59cd74ff208fb8ea9a0c9aebebeaa83b0cacf9b63cb312048c293" exitCode=1 Nov 24 21:43:18 crc kubenswrapper[4915]: I1124 21:43:18.025340 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" event={"ID":"afef51f6-3303-49ea-945d-a5e98d263877","Type":"ContainerDied","Data":"ed030739eab59cd74ff208fb8ea9a0c9aebebeaa83b0cacf9b63cb312048c293"} Nov 24 21:43:18 crc kubenswrapper[4915]: I1124 21:43:18.026169 4915 scope.go:117] "RemoveContainer" containerID="ed030739eab59cd74ff208fb8ea9a0c9aebebeaa83b0cacf9b63cb312048c293" Nov 24 21:43:18 crc kubenswrapper[4915]: I1124 21:43:18.044323 4915 generic.go:334] "Generic (PLEG): container finished" podID="60efd5e8-d968-4c42-bca3-5890cb8b3ce9" containerID="a6834d808011d59c43aa629a9039f7e6cf3f388d7f861ded3215aa997f625743" exitCode=1 Nov 24 21:43:18 crc kubenswrapper[4915]: I1124 21:43:18.044425 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64fdcf7c87-z5n75" event={"ID":"60efd5e8-d968-4c42-bca3-5890cb8b3ce9","Type":"ContainerDied","Data":"a6834d808011d59c43aa629a9039f7e6cf3f388d7f861ded3215aa997f625743"} Nov 24 21:43:18 crc kubenswrapper[4915]: I1124 21:43:18.045241 4915 scope.go:117] "RemoveContainer" containerID="a6834d808011d59c43aa629a9039f7e6cf3f388d7f861ded3215aa997f625743" Nov 24 21:43:18 crc kubenswrapper[4915]: I1124 21:43:18.071139 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" podStartSLOduration=4.071116574 podStartE2EDuration="4.071116574s" podCreationTimestamp="2025-11-24 21:43:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:43:18.06616904 +0000 UTC m=+1416.382421223" watchObservedRunningTime="2025-11-24 21:43:18.071116574 +0000 UTC m=+1416.387368767" Nov 24 21:43:18 crc kubenswrapper[4915]: I1124 21:43:18.133222 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb","Type":"ContainerStarted","Data":"9cacddc55197936037aabc9afb9efe5ecd8ae3e0b51843d17dadd249d3de5880"} Nov 24 21:43:18 crc kubenswrapper[4915]: I1124 21:43:18.585857 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:18 crc kubenswrapper[4915]: I1124 21:43:18.594378 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-968b48459-dfqg8" Nov 24 21:43:19 crc kubenswrapper[4915]: I1124 21:43:19.145984 4915 generic.go:334] "Generic (PLEG): container finished" podID="afef51f6-3303-49ea-945d-a5e98d263877" containerID="b027c4567ed97c7853743d5ca0255d1d7ac52ddde67783d954d4b7d885a772ed" exitCode=1 Nov 24 21:43:19 crc kubenswrapper[4915]: I1124 21:43:19.146066 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" event={"ID":"afef51f6-3303-49ea-945d-a5e98d263877","Type":"ContainerDied","Data":"b027c4567ed97c7853743d5ca0255d1d7ac52ddde67783d954d4b7d885a772ed"} Nov 24 21:43:19 crc kubenswrapper[4915]: I1124 21:43:19.146415 4915 scope.go:117] "RemoveContainer" containerID="ed030739eab59cd74ff208fb8ea9a0c9aebebeaa83b0cacf9b63cb312048c293" Nov 24 21:43:19 crc kubenswrapper[4915]: I1124 21:43:19.146979 4915 scope.go:117] "RemoveContainer" 
containerID="b027c4567ed97c7853743d5ca0255d1d7ac52ddde67783d954d4b7d885a772ed" Nov 24 21:43:19 crc kubenswrapper[4915]: E1124 21:43:19.147330 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-788f8c6c87-bw87v_openstack(afef51f6-3303-49ea-945d-a5e98d263877)\"" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" podUID="afef51f6-3303-49ea-945d-a5e98d263877" Nov 24 21:43:19 crc kubenswrapper[4915]: I1124 21:43:19.149468 4915 generic.go:334] "Generic (PLEG): container finished" podID="60efd5e8-d968-4c42-bca3-5890cb8b3ce9" containerID="7ddb5e3468f6e601987c372df51f47725908dbce14072f6744025535a9dddb85" exitCode=1 Nov 24 21:43:19 crc kubenswrapper[4915]: I1124 21:43:19.149557 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64fdcf7c87-z5n75" event={"ID":"60efd5e8-d968-4c42-bca3-5890cb8b3ce9","Type":"ContainerDied","Data":"7ddb5e3468f6e601987c372df51f47725908dbce14072f6744025535a9dddb85"} Nov 24 21:43:19 crc kubenswrapper[4915]: I1124 21:43:19.150281 4915 scope.go:117] "RemoveContainer" containerID="7ddb5e3468f6e601987c372df51f47725908dbce14072f6744025535a9dddb85" Nov 24 21:43:19 crc kubenswrapper[4915]: E1124 21:43:19.150582 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-64fdcf7c87-z5n75_openstack(60efd5e8-d968-4c42-bca3-5890cb8b3ce9)\"" pod="openstack/heat-api-64fdcf7c87-z5n75" podUID="60efd5e8-d968-4c42-bca3-5890cb8b3ce9" Nov 24 21:43:19 crc kubenswrapper[4915]: I1124 21:43:19.254970 4915 scope.go:117] "RemoveContainer" containerID="a6834d808011d59c43aa629a9039f7e6cf3f388d7f861ded3215aa997f625743" Nov 24 21:43:19 crc kubenswrapper[4915]: I1124 21:43:19.731981 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:43:19 crc kubenswrapper[4915]: I1124 21:43:19.796680 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gzpxc"] Nov 24 21:43:19 crc kubenswrapper[4915]: I1124 21:43:19.797022 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" podUID="4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" containerName="dnsmasq-dns" containerID="cri-o://d6e6fc895d43407901a8b3e58e339275612213ef5303ec18cdfbd0a09a77936f" gracePeriod=10 Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.231521 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb","Type":"ContainerStarted","Data":"a341bdbd36c838d6282800c59151bf8da0e4781776e7176542adb0c7493ba339"} Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.231980 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="ceilometer-central-agent" containerID="cri-o://9d8fcc49d4845bde8e7a10366404f976096c59219377177fe6698b3e29d863d8" gracePeriod=30 Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.232238 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.232994 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="proxy-httpd" containerID="cri-o://a341bdbd36c838d6282800c59151bf8da0e4781776e7176542adb0c7493ba339" gracePeriod=30 Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.233147 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="ceilometer-notification-agent" 
containerID="cri-o://cd01ce869c7a1ae18efa30bd50d95827ef58a4ae136ea90c6193c577308cd5c2" gracePeriod=30 Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.233195 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="sg-core" containerID="cri-o://9cacddc55197936037aabc9afb9efe5ecd8ae3e0b51843d17dadd249d3de5880" gracePeriod=30 Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.266005 4915 generic.go:334] "Generic (PLEG): container finished" podID="4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" containerID="d6e6fc895d43407901a8b3e58e339275612213ef5303ec18cdfbd0a09a77936f" exitCode=0 Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.266085 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" event={"ID":"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140","Type":"ContainerDied","Data":"d6e6fc895d43407901a8b3e58e339275612213ef5303ec18cdfbd0a09a77936f"} Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.272641 4915 scope.go:117] "RemoveContainer" containerID="b027c4567ed97c7853743d5ca0255d1d7ac52ddde67783d954d4b7d885a772ed" Nov 24 21:43:20 crc kubenswrapper[4915]: E1124 21:43:20.273042 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-788f8c6c87-bw87v_openstack(afef51f6-3303-49ea-945d-a5e98d263877)\"" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" podUID="afef51f6-3303-49ea-945d-a5e98d263877" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.275816 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.446657366 podStartE2EDuration="10.275790718s" podCreationTimestamp="2025-11-24 21:43:10 +0000 UTC" firstStartedPulling="2025-11-24 21:43:12.087479028 +0000 UTC m=+1410.403731201" 
lastFinishedPulling="2025-11-24 21:43:18.91661238 +0000 UTC m=+1417.232864553" observedRunningTime="2025-11-24 21:43:20.263847565 +0000 UTC m=+1418.580099738" watchObservedRunningTime="2025-11-24 21:43:20.275790718 +0000 UTC m=+1418.592042901" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.283590 4915 scope.go:117] "RemoveContainer" containerID="7ddb5e3468f6e601987c372df51f47725908dbce14072f6744025535a9dddb85" Nov 24 21:43:20 crc kubenswrapper[4915]: E1124 21:43:20.284048 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-64fdcf7c87-z5n75_openstack(60efd5e8-d968-4c42-bca3-5890cb8b3ce9)\"" pod="openstack/heat-api-64fdcf7c87-z5n75" podUID="60efd5e8-d968-4c42-bca3-5890cb8b3ce9" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.467410 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.490500 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-dns-svc\") pod \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.490580 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-ovsdbserver-sb\") pod \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.490796 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-ovsdbserver-nb\") pod 
\"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.490938 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-config\") pod \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.490981 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5cx5\" (UniqueName: \"kubernetes.io/projected/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-kube-api-access-p5cx5\") pod \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.491062 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-dns-swift-storage-0\") pod \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\" (UID: \"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140\") " Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.502985 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-kube-api-access-p5cx5" (OuterVolumeSpecName: "kube-api-access-p5cx5") pod "4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" (UID: "4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140"). InnerVolumeSpecName "kube-api-access-p5cx5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.609189 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5cx5\" (UniqueName: \"kubernetes.io/projected/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-kube-api-access-p5cx5\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.631541 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" (UID: "4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.644342 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" (UID: "4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.646605 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-config" (OuterVolumeSpecName: "config") pod "4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" (UID: "4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.672025 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" (UID: "4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.683877 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" (UID: "4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.711567 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.711610 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.711638 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.711650 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.711661 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.831575 4915 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-conductor-db-sync-9jg68"] Nov 24 21:43:20 crc kubenswrapper[4915]: E1124 21:43:20.832146 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4387d1ab-ac63-465a-a164-1fb5f78352da" containerName="mariadb-account-create" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832168 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4387d1ab-ac63-465a-a164-1fb5f78352da" containerName="mariadb-account-create" Nov 24 21:43:20 crc kubenswrapper[4915]: E1124 21:43:20.832192 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c50a58e9-839d-4c0a-9aea-e27e9b892db0" containerName="mariadb-database-create" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832199 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50a58e9-839d-4c0a-9aea-e27e9b892db0" containerName="mariadb-database-create" Nov 24 21:43:20 crc kubenswrapper[4915]: E1124 21:43:20.832214 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca21771-e57e-49fc-a4c0-a8fea17fd2ea" containerName="mariadb-database-create" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832222 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca21771-e57e-49fc-a4c0-a8fea17fd2ea" containerName="mariadb-database-create" Nov 24 21:43:20 crc kubenswrapper[4915]: E1124 21:43:20.832233 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" containerName="init" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832240 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" containerName="init" Nov 24 21:43:20 crc kubenswrapper[4915]: E1124 21:43:20.832253 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ea7f227-3e0f-4e4f-9136-22a74b0f3057" containerName="mariadb-account-create" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832258 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5ea7f227-3e0f-4e4f-9136-22a74b0f3057" containerName="mariadb-account-create" Nov 24 21:43:20 crc kubenswrapper[4915]: E1124 21:43:20.832268 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" containerName="dnsmasq-dns" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832273 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" containerName="dnsmasq-dns" Nov 24 21:43:20 crc kubenswrapper[4915]: E1124 21:43:20.832283 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="789abde1-5e42-4032-8b87-e3c475c709e8" containerName="mariadb-database-create" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832289 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="789abde1-5e42-4032-8b87-e3c475c709e8" containerName="mariadb-database-create" Nov 24 21:43:20 crc kubenswrapper[4915]: E1124 21:43:20.832319 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1089ced-cc42-419b-a130-e71479e51b16" containerName="mariadb-account-create" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832327 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1089ced-cc42-419b-a130-e71479e51b16" containerName="mariadb-account-create" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832566 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" containerName="dnsmasq-dns" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832591 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ea7f227-3e0f-4e4f-9136-22a74b0f3057" containerName="mariadb-account-create" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832605 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c50a58e9-839d-4c0a-9aea-e27e9b892db0" containerName="mariadb-database-create" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832622 4915 
memory_manager.go:354] "RemoveStaleState removing state" podUID="789abde1-5e42-4032-8b87-e3c475c709e8" containerName="mariadb-database-create" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832633 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4387d1ab-ac63-465a-a164-1fb5f78352da" containerName="mariadb-account-create" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832645 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1089ced-cc42-419b-a130-e71479e51b16" containerName="mariadb-account-create" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.832661 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ca21771-e57e-49fc-a4c0-a8fea17fd2ea" containerName="mariadb-database-create" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.833505 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.836114 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.836335 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9mtxz" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.836529 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.841403 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jg68"] Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.915977 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bsmv\" (UniqueName: \"kubernetes.io/projected/a25cfd15-cca3-48ae-b45c-81cc7a690f31-kube-api-access-9bsmv\") pod \"nova-cell0-conductor-db-sync-9jg68\" (UID: 
\"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.916047 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-config-data\") pod \"nova-cell0-conductor-db-sync-9jg68\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.916117 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-scripts\") pod \"nova-cell0-conductor-db-sync-9jg68\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:43:20 crc kubenswrapper[4915]: I1124 21:43:20.916168 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9jg68\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.018035 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-config-data\") pod \"nova-cell0-conductor-db-sync-9jg68\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.018116 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-scripts\") pod 
\"nova-cell0-conductor-db-sync-9jg68\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.018174 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9jg68\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.018298 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bsmv\" (UniqueName: \"kubernetes.io/projected/a25cfd15-cca3-48ae-b45c-81cc7a690f31-kube-api-access-9bsmv\") pod \"nova-cell0-conductor-db-sync-9jg68\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.023390 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-config-data\") pod \"nova-cell0-conductor-db-sync-9jg68\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.025255 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9jg68\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.027027 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-scripts\") pod 
\"nova-cell0-conductor-db-sync-9jg68\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.037763 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bsmv\" (UniqueName: \"kubernetes.io/projected/a25cfd15-cca3-48ae-b45c-81cc7a690f31-kube-api-access-9bsmv\") pod \"nova-cell0-conductor-db-sync-9jg68\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.160704 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.320438 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" event={"ID":"4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140","Type":"ContainerDied","Data":"65f70f6d6b12ef79a4482df97069162ec96de1596982c02c2c10b85406a7bf60"} Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.320466 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-gzpxc" Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.320485 4915 scope.go:117] "RemoveContainer" containerID="d6e6fc895d43407901a8b3e58e339275612213ef5303ec18cdfbd0a09a77936f" Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.323943 4915 generic.go:334] "Generic (PLEG): container finished" podID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerID="a341bdbd36c838d6282800c59151bf8da0e4781776e7176542adb0c7493ba339" exitCode=0 Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.323979 4915 generic.go:334] "Generic (PLEG): container finished" podID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerID="9cacddc55197936037aabc9afb9efe5ecd8ae3e0b51843d17dadd249d3de5880" exitCode=2 Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.323987 4915 generic.go:334] "Generic (PLEG): container finished" podID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerID="cd01ce869c7a1ae18efa30bd50d95827ef58a4ae136ea90c6193c577308cd5c2" exitCode=0 Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.324008 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb","Type":"ContainerDied","Data":"a341bdbd36c838d6282800c59151bf8da0e4781776e7176542adb0c7493ba339"} Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.324038 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb","Type":"ContainerDied","Data":"9cacddc55197936037aabc9afb9efe5ecd8ae3e0b51843d17dadd249d3de5880"} Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.324049 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb","Type":"ContainerDied","Data":"cd01ce869c7a1ae18efa30bd50d95827ef58a4ae136ea90c6193c577308cd5c2"} Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.360763 4915 scope.go:117] 
"RemoveContainer" containerID="dfca5ca6d5e8f82c6696193a03078f53441e1a6473fa31ce9b9dda2ab1adb4a6" Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.471868 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gzpxc"] Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.488825 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-gzpxc"] Nov 24 21:43:21 crc kubenswrapper[4915]: W1124 21:43:21.679644 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda25cfd15_cca3_48ae_b45c_81cc7a690f31.slice/crio-d3573873f1183ab93e585f8a7efeedc05cac9f060fe73520133da9a01da570df WatchSource:0}: Error finding container d3573873f1183ab93e585f8a7efeedc05cac9f060fe73520133da9a01da570df: Status 404 returned error can't find the container with id d3573873f1183ab93e585f8a7efeedc05cac9f060fe73520133da9a01da570df Nov 24 21:43:21 crc kubenswrapper[4915]: I1124 21:43:21.716437 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jg68"] Nov 24 21:43:22 crc kubenswrapper[4915]: E1124 21:43:22.111542 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e5395dd_1fa1_461c_b0eb_edc5c817955c.slice/crio-conmon-a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:43:22 crc kubenswrapper[4915]: E1124 21:43:22.265539 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e5395dd_1fa1_461c_b0eb_edc5c817955c.slice/crio-conmon-a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:43:22 crc 
kubenswrapper[4915]: I1124 21:43:22.320495 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.321410 4915 scope.go:117] "RemoveContainer" containerID="b027c4567ed97c7853743d5ca0255d1d7ac52ddde67783d954d4b7d885a772ed" Nov 24 21:43:22 crc kubenswrapper[4915]: E1124 21:43:22.321720 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-788f8c6c87-bw87v_openstack(afef51f6-3303-49ea-945d-a5e98d263877)\"" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" podUID="afef51f6-3303-49ea-945d-a5e98d263877" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.322026 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.342327 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9jg68" event={"ID":"a25cfd15-cca3-48ae-b45c-81cc7a690f31","Type":"ContainerStarted","Data":"d3573873f1183ab93e585f8a7efeedc05cac9f060fe73520133da9a01da570df"} Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.342986 4915 scope.go:117] "RemoveContainer" containerID="b027c4567ed97c7853743d5ca0255d1d7ac52ddde67783d954d4b7d885a772ed" Nov 24 21:43:22 crc kubenswrapper[4915]: E1124 21:43:22.343215 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-788f8c6c87-bw87v_openstack(afef51f6-3303-49ea-945d-a5e98d263877)\"" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" podUID="afef51f6-3303-49ea-945d-a5e98d263877" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.419840 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.419884 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.420688 4915 scope.go:117] "RemoveContainer" containerID="7ddb5e3468f6e601987c372df51f47725908dbce14072f6744025535a9dddb85" Nov 24 21:43:22 crc kubenswrapper[4915]: E1124 21:43:22.421001 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-64fdcf7c87-z5n75_openstack(60efd5e8-d968-4c42-bca3-5890cb8b3ce9)\"" pod="openstack/heat-api-64fdcf7c87-z5n75" podUID="60efd5e8-d968-4c42-bca3-5890cb8b3ce9" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.441002 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140" path="/var/lib/kubelet/pods/4d8dde4f-2b9a-4c27-be36-9cbfb1cb4140/volumes" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.568436 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-htv6k"] Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.571169 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.608521 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-htv6k"] Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.658257 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91467267-1652-40bb-aa0b-99ede2e49bb4-utilities\") pod \"redhat-operators-htv6k\" (UID: \"91467267-1652-40bb-aa0b-99ede2e49bb4\") " pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.658459 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91467267-1652-40bb-aa0b-99ede2e49bb4-catalog-content\") pod \"redhat-operators-htv6k\" (UID: \"91467267-1652-40bb-aa0b-99ede2e49bb4\") " pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.658563 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hdmf\" (UniqueName: \"kubernetes.io/projected/91467267-1652-40bb-aa0b-99ede2e49bb4-kube-api-access-5hdmf\") pod \"redhat-operators-htv6k\" (UID: \"91467267-1652-40bb-aa0b-99ede2e49bb4\") " pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.761007 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hdmf\" (UniqueName: \"kubernetes.io/projected/91467267-1652-40bb-aa0b-99ede2e49bb4-kube-api-access-5hdmf\") pod \"redhat-operators-htv6k\" (UID: \"91467267-1652-40bb-aa0b-99ede2e49bb4\") " pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.761141 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91467267-1652-40bb-aa0b-99ede2e49bb4-utilities\") pod \"redhat-operators-htv6k\" (UID: \"91467267-1652-40bb-aa0b-99ede2e49bb4\") " pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.761262 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91467267-1652-40bb-aa0b-99ede2e49bb4-catalog-content\") pod \"redhat-operators-htv6k\" (UID: \"91467267-1652-40bb-aa0b-99ede2e49bb4\") " pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.761648 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91467267-1652-40bb-aa0b-99ede2e49bb4-utilities\") pod \"redhat-operators-htv6k\" (UID: \"91467267-1652-40bb-aa0b-99ede2e49bb4\") " pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.761696 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91467267-1652-40bb-aa0b-99ede2e49bb4-catalog-content\") pod \"redhat-operators-htv6k\" (UID: \"91467267-1652-40bb-aa0b-99ede2e49bb4\") " pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.788450 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hdmf\" (UniqueName: \"kubernetes.io/projected/91467267-1652-40bb-aa0b-99ede2e49bb4-kube-api-access-5hdmf\") pod \"redhat-operators-htv6k\" (UID: \"91467267-1652-40bb-aa0b-99ede2e49bb4\") " pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:43:22 crc kubenswrapper[4915]: I1124 21:43:22.897552 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:43:23 crc kubenswrapper[4915]: I1124 21:43:23.663702 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-htv6k"] Nov 24 21:43:23 crc kubenswrapper[4915]: W1124 21:43:23.673803 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91467267_1652_40bb_aa0b_99ede2e49bb4.slice/crio-e259b0d23eba8d69eeab26d0412d268fc3ef8cbc7a6f6d38a9a3414c83f1b662 WatchSource:0}: Error finding container e259b0d23eba8d69eeab26d0412d268fc3ef8cbc7a6f6d38a9a3414c83f1b662: Status 404 returned error can't find the container with id e259b0d23eba8d69eeab26d0412d268fc3ef8cbc7a6f6d38a9a3414c83f1b662 Nov 24 21:43:24 crc kubenswrapper[4915]: I1124 21:43:24.366017 4915 generic.go:334] "Generic (PLEG): container finished" podID="91467267-1652-40bb-aa0b-99ede2e49bb4" containerID="487d7daf8de619d22ec1ecf71c975e31ded3d5b2fcf5d305422ea412db774bfb" exitCode=0 Nov 24 21:43:24 crc kubenswrapper[4915]: I1124 21:43:24.366164 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htv6k" event={"ID":"91467267-1652-40bb-aa0b-99ede2e49bb4","Type":"ContainerDied","Data":"487d7daf8de619d22ec1ecf71c975e31ded3d5b2fcf5d305422ea412db774bfb"} Nov 24 21:43:24 crc kubenswrapper[4915]: I1124 21:43:24.366388 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htv6k" event={"ID":"91467267-1652-40bb-aa0b-99ede2e49bb4","Type":"ContainerStarted","Data":"e259b0d23eba8d69eeab26d0412d268fc3ef8cbc7a6f6d38a9a3414c83f1b662"} Nov 24 21:43:24 crc kubenswrapper[4915]: I1124 21:43:24.515986 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:25 crc kubenswrapper[4915]: I1124 21:43:25.388159 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-htv6k" event={"ID":"91467267-1652-40bb-aa0b-99ede2e49bb4","Type":"ContainerStarted","Data":"3fa464e92b916bf0d476320585eac1f40008cae1dcd8784831755b8ae1ca0506"} Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.422699 4915 generic.go:334] "Generic (PLEG): container finished" podID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerID="9d8fcc49d4845bde8e7a10366404f976096c59219377177fe6698b3e29d863d8" exitCode=0 Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.424432 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb","Type":"ContainerDied","Data":"9d8fcc49d4845bde8e7a10366404f976096c59219377177fe6698b3e29d863d8"} Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.424476 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb","Type":"ContainerDied","Data":"3840af77be930601a442407ddc24873ca9dc6938e81e48577528d6e4047340e7"} Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.424486 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3840af77be930601a442407ddc24873ca9dc6938e81e48577528d6e4047340e7" Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.424864 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.566740 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-combined-ca-bundle\") pod \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.566820 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-scripts\") pod \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.566999 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-sg-core-conf-yaml\") pod \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.567039 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzh4t\" (UniqueName: \"kubernetes.io/projected/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-kube-api-access-nzh4t\") pod \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.567129 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-run-httpd\") pod \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.567182 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-config-data\") pod \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.567248 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-log-httpd\") pod \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\" (UID: \"c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb\") " Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.569849 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" (UID: "c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.570800 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" (UID: "c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.586976 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-scripts" (OuterVolumeSpecName: "scripts") pod "c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" (UID: "c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.590384 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-kube-api-access-nzh4t" (OuterVolumeSpecName: "kube-api-access-nzh4t") pod "c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" (UID: "c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb"). InnerVolumeSpecName "kube-api-access-nzh4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.610946 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" (UID: "c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.673435 4915 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.680375 4915 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.680447 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.680523 4915 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" 
Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.680610 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzh4t\" (UniqueName: \"kubernetes.io/projected/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-kube-api-access-nzh4t\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.737881 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" (UID: "c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.785129 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.792819 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-config-data" (OuterVolumeSpecName: "config-data") pod "c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" (UID: "c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:26 crc kubenswrapper[4915]: I1124 21:43:26.888128 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.435957 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.476834 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.501271 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.528027 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:43:27 crc kubenswrapper[4915]: E1124 21:43:27.528641 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="ceilometer-notification-agent" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.528661 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="ceilometer-notification-agent" Nov 24 21:43:27 crc kubenswrapper[4915]: E1124 21:43:27.528707 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="sg-core" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.528717 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="sg-core" Nov 24 21:43:27 crc kubenswrapper[4915]: E1124 21:43:27.528729 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="proxy-httpd" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.528740 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="proxy-httpd" Nov 24 21:43:27 crc kubenswrapper[4915]: E1124 21:43:27.528804 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="ceilometer-central-agent" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.528813 4915 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="ceilometer-central-agent" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.529085 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="sg-core" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.529107 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="ceilometer-notification-agent" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.529121 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="ceilometer-central-agent" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.529143 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" containerName="proxy-httpd" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.531965 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.536395 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.536604 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.544718 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.721442 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee896549-5a9e-4159-b7d9-5eb0321366aa-log-httpd\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.721606 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-config-data\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.721655 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee896549-5a9e-4159-b7d9-5eb0321366aa-run-httpd\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.721694 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-scripts\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 
21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.721859 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m28cw\" (UniqueName: \"kubernetes.io/projected/ee896549-5a9e-4159-b7d9-5eb0321366aa-kube-api-access-m28cw\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.721971 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.722068 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.824347 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee896549-5a9e-4159-b7d9-5eb0321366aa-run-httpd\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.824419 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-scripts\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.824448 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m28cw\" 
(UniqueName: \"kubernetes.io/projected/ee896549-5a9e-4159-b7d9-5eb0321366aa-kube-api-access-m28cw\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.824475 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.824506 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.824556 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee896549-5a9e-4159-b7d9-5eb0321366aa-log-httpd\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.824856 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee896549-5a9e-4159-b7d9-5eb0321366aa-run-httpd\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.825065 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee896549-5a9e-4159-b7d9-5eb0321366aa-log-httpd\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 
21:43:27.825115 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-config-data\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.830496 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-scripts\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.832067 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.832801 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-config-data\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.852761 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.876743 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m28cw\" (UniqueName: \"kubernetes.io/projected/ee896549-5a9e-4159-b7d9-5eb0321366aa-kube-api-access-m28cw\") pod \"ceilometer-0\" (UID: 
\"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.902413 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.960120 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.965370 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 21:43:27 crc kubenswrapper[4915]: I1124 21:43:27.967363 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:43:28 crc kubenswrapper[4915]: I1124 21:43:28.051949 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:43:28 crc kubenswrapper[4915]: I1124 21:43:28.065353 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-788f8c6c87-bw87v"] Nov 24 21:43:28 crc kubenswrapper[4915]: I1124 21:43:28.119506 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-64fdcf7c87-z5n75"] Nov 24 21:43:28 crc kubenswrapper[4915]: I1124 21:43:28.454887 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb" path="/var/lib/kubelet/pods/c42c9385-a4cf-4bb3-b93b-fd1ad15cb3cb/volumes" Nov 24 21:43:29 crc kubenswrapper[4915]: I1124 21:43:29.101231 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:43:30 crc kubenswrapper[4915]: I1124 21:43:30.498056 4915 generic.go:334] "Generic (PLEG): container finished" podID="91467267-1652-40bb-aa0b-99ede2e49bb4" containerID="3fa464e92b916bf0d476320585eac1f40008cae1dcd8784831755b8ae1ca0506" exitCode=0 Nov 24 21:43:30 crc kubenswrapper[4915]: I1124 
21:43:30.498454 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htv6k" event={"ID":"91467267-1652-40bb-aa0b-99ede2e49bb4","Type":"ContainerDied","Data":"3fa464e92b916bf0d476320585eac1f40008cae1dcd8784831755b8ae1ca0506"} Nov 24 21:43:32 crc kubenswrapper[4915]: I1124 21:43:32.374483 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 21:43:32 crc kubenswrapper[4915]: I1124 21:43:32.424562 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-8f669b545-6bqfj"] Nov 24 21:43:32 crc kubenswrapper[4915]: I1124 21:43:32.424828 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-8f669b545-6bqfj" podUID="a652c587-ebd5-4783-8397-9275b5a3b682" containerName="heat-engine" containerID="cri-o://758e36b26cc0fb5693068dcda64a28fd6b96b4d2a353c93a74e9837e318e396b" gracePeriod=60 Nov 24 21:43:32 crc kubenswrapper[4915]: E1124 21:43:32.700032 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e5395dd_1fa1_461c_b0eb_edc5c817955c.slice/crio-conmon-a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:43:34 crc kubenswrapper[4915]: E1124 21:43:34.481540 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="758e36b26cc0fb5693068dcda64a28fd6b96b4d2a353c93a74e9837e318e396b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:43:34 crc kubenswrapper[4915]: E1124 21:43:34.483497 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container 
is stopping, stdout: , stderr: , exit code -1" containerID="758e36b26cc0fb5693068dcda64a28fd6b96b4d2a353c93a74e9837e318e396b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:43:34 crc kubenswrapper[4915]: E1124 21:43:34.487191 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="758e36b26cc0fb5693068dcda64a28fd6b96b4d2a353c93a74e9837e318e396b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:43:34 crc kubenswrapper[4915]: E1124 21:43:34.487251 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-8f669b545-6bqfj" podUID="a652c587-ebd5-4783-8397-9275b5a3b682" containerName="heat-engine" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.020519 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.073163 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.168250 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-config-data-custom\") pod \"afef51f6-3303-49ea-945d-a5e98d263877\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.168971 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-config-data\") pod \"afef51f6-3303-49ea-945d-a5e98d263877\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.169105 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-config-data\") pod \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.169159 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-config-data-custom\") pod \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.169184 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz6p8\" (UniqueName: \"kubernetes.io/projected/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-kube-api-access-xz6p8\") pod \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.169263 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-combined-ca-bundle\") pod \"afef51f6-3303-49ea-945d-a5e98d263877\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.169352 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn2n7\" (UniqueName: \"kubernetes.io/projected/afef51f6-3303-49ea-945d-a5e98d263877-kube-api-access-hn2n7\") pod \"afef51f6-3303-49ea-945d-a5e98d263877\" (UID: \"afef51f6-3303-49ea-945d-a5e98d263877\") " Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.169415 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-combined-ca-bundle\") pod \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\" (UID: \"60efd5e8-d968-4c42-bca3-5890cb8b3ce9\") " Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.188512 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "afef51f6-3303-49ea-945d-a5e98d263877" (UID: "afef51f6-3303-49ea-945d-a5e98d263877"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.190404 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afef51f6-3303-49ea-945d-a5e98d263877-kube-api-access-hn2n7" (OuterVolumeSpecName: "kube-api-access-hn2n7") pod "afef51f6-3303-49ea-945d-a5e98d263877" (UID: "afef51f6-3303-49ea-945d-a5e98d263877"). InnerVolumeSpecName "kube-api-access-hn2n7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.194997 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "60efd5e8-d968-4c42-bca3-5890cb8b3ce9" (UID: "60efd5e8-d968-4c42-bca3-5890cb8b3ce9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.196415 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-kube-api-access-xz6p8" (OuterVolumeSpecName: "kube-api-access-xz6p8") pod "60efd5e8-d968-4c42-bca3-5890cb8b3ce9" (UID: "60efd5e8-d968-4c42-bca3-5890cb8b3ce9"). InnerVolumeSpecName "kube-api-access-xz6p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.272690 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.272734 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.272746 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz6p8\" (UniqueName: \"kubernetes.io/projected/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-kube-api-access-xz6p8\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.272761 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn2n7\" (UniqueName: 
\"kubernetes.io/projected/afef51f6-3303-49ea-945d-a5e98d263877-kube-api-access-hn2n7\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.341446 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afef51f6-3303-49ea-945d-a5e98d263877" (UID: "afef51f6-3303-49ea-945d-a5e98d263877"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.369081 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-config-data" (OuterVolumeSpecName: "config-data") pod "afef51f6-3303-49ea-945d-a5e98d263877" (UID: "afef51f6-3303-49ea-945d-a5e98d263877"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.378373 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.378406 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afef51f6-3303-49ea-945d-a5e98d263877-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.497914 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60efd5e8-d968-4c42-bca3-5890cb8b3ce9" (UID: "60efd5e8-d968-4c42-bca3-5890cb8b3ce9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.500908 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-config-data" (OuterVolumeSpecName: "config-data") pod "60efd5e8-d968-4c42-bca3-5890cb8b3ce9" (UID: "60efd5e8-d968-4c42-bca3-5890cb8b3ce9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.588611 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.588858 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60efd5e8-d968-4c42-bca3-5890cb8b3ce9-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.599566 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.637034 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-64fdcf7c87-z5n75" event={"ID":"60efd5e8-d968-4c42-bca3-5890cb8b3ce9","Type":"ContainerDied","Data":"5607af2a44bb18d7f0b7131908b4ce41f0035482da47ea84a3ae8aa5728c062e"} Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.637077 4915 scope.go:117] "RemoveContainer" containerID="7ddb5e3468f6e601987c372df51f47725908dbce14072f6744025535a9dddb85" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.637172 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-64fdcf7c87-z5n75" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.659894 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htv6k" event={"ID":"91467267-1652-40bb-aa0b-99ede2e49bb4","Type":"ContainerStarted","Data":"bc3899f6b7601c747c5c36fc3fae30af059ecf121217c436a655b0835bf58838"} Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.662794 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" event={"ID":"afef51f6-3303-49ea-945d-a5e98d263877","Type":"ContainerDied","Data":"130b4fd63d2e2f7878f3a96cafd82d0e9c248eee7c5d3efb6f8ce13e37282789"} Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.662850 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-788f8c6c87-bw87v" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.699991 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-htv6k" podStartSLOduration=3.044210166 podStartE2EDuration="14.699970147s" podCreationTimestamp="2025-11-24 21:43:22 +0000 UTC" firstStartedPulling="2025-11-24 21:43:24.368624652 +0000 UTC m=+1422.684876815" lastFinishedPulling="2025-11-24 21:43:36.024384623 +0000 UTC m=+1434.340636796" observedRunningTime="2025-11-24 21:43:36.691410287 +0000 UTC m=+1435.007662460" watchObservedRunningTime="2025-11-24 21:43:36.699970147 +0000 UTC m=+1435.016222320" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.732712 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-788f8c6c87-bw87v"] Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.764702 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-788f8c6c87-bw87v"] Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.796811 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-api-64fdcf7c87-z5n75"] Nov 24 21:43:36 crc kubenswrapper[4915]: E1124 21:43:36.828675 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e5395dd_1fa1_461c_b0eb_edc5c817955c.slice/crio-conmon-a2f806c8ed4bac045164780c23846ef30c72f1cc469575104ff81f9ce88ae615.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:43:36 crc kubenswrapper[4915]: I1124 21:43:36.882410 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-64fdcf7c87-z5n75"] Nov 24 21:43:37 crc kubenswrapper[4915]: I1124 21:43:37.124955 4915 scope.go:117] "RemoveContainer" containerID="b027c4567ed97c7853743d5ca0255d1d7ac52ddde67783d954d4b7d885a772ed" Nov 24 21:43:37 crc kubenswrapper[4915]: I1124 21:43:37.678979 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee896549-5a9e-4159-b7d9-5eb0321366aa","Type":"ContainerStarted","Data":"fca6a00c5a4e44ff9eaf608d5af02f35111dc78894f7e41b9cb344e7d42492bb"} Nov 24 21:43:37 crc kubenswrapper[4915]: I1124 21:43:37.682244 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9jg68" event={"ID":"a25cfd15-cca3-48ae-b45c-81cc7a690f31","Type":"ContainerStarted","Data":"6d1276a333bb1ea524f81c8c303c52a8e90f0e61337e70a168ecd6200bae1b69"} Nov 24 21:43:37 crc kubenswrapper[4915]: I1124 21:43:37.705447 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-9jg68" podStartSLOduration=3.398165894 podStartE2EDuration="17.705424906s" podCreationTimestamp="2025-11-24 21:43:20 +0000 UTC" firstStartedPulling="2025-11-24 21:43:21.721248712 +0000 UTC m=+1420.037500875" lastFinishedPulling="2025-11-24 21:43:36.028507714 +0000 UTC m=+1434.344759887" observedRunningTime="2025-11-24 21:43:37.697886244 +0000 UTC m=+1436.014138417" 
watchObservedRunningTime="2025-11-24 21:43:37.705424906 +0000 UTC m=+1436.021677079" Nov 24 21:43:38 crc kubenswrapper[4915]: I1124 21:43:38.441609 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60efd5e8-d968-4c42-bca3-5890cb8b3ce9" path="/var/lib/kubelet/pods/60efd5e8-d968-4c42-bca3-5890cb8b3ce9/volumes" Nov 24 21:43:38 crc kubenswrapper[4915]: I1124 21:43:38.442655 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afef51f6-3303-49ea-945d-a5e98d263877" path="/var/lib/kubelet/pods/afef51f6-3303-49ea-945d-a5e98d263877/volumes" Nov 24 21:43:38 crc kubenswrapper[4915]: I1124 21:43:38.697863 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee896549-5a9e-4159-b7d9-5eb0321366aa","Type":"ContainerStarted","Data":"ede936be91c81cbeec8d58555049b90d26a0205cf95c14700174b3487a2ea2ba"} Nov 24 21:43:38 crc kubenswrapper[4915]: I1124 21:43:38.697914 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee896549-5a9e-4159-b7d9-5eb0321366aa","Type":"ContainerStarted","Data":"7a69382b6cc2be0ad42fc337f95e7db8d0f1b3b9728ed7020c17db4f1c39679f"} Nov 24 21:43:39 crc kubenswrapper[4915]: I1124 21:43:39.384722 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:43:39 crc kubenswrapper[4915]: I1124 21:43:39.385186 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7e61db9f-1ee2-4a9a-88ee-3da3d975514a" containerName="glance-log" containerID="cri-o://a431a8479b3b83816b2ae374c619415c393d9ae1771f2b17130475d272e4ac73" gracePeriod=30 Nov 24 21:43:39 crc kubenswrapper[4915]: I1124 21:43:39.385617 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7e61db9f-1ee2-4a9a-88ee-3da3d975514a" containerName="glance-httpd" 
containerID="cri-o://7acd04682579663946c79236b8a9dce5d70edb35ad16629d503e76912c8f9f87" gracePeriod=30 Nov 24 21:43:39 crc kubenswrapper[4915]: I1124 21:43:39.719499 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db9f-1ee2-4a9a-88ee-3da3d975514a" containerID="a431a8479b3b83816b2ae374c619415c393d9ae1771f2b17130475d272e4ac73" exitCode=143 Nov 24 21:43:39 crc kubenswrapper[4915]: I1124 21:43:39.719600 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7e61db9f-1ee2-4a9a-88ee-3da3d975514a","Type":"ContainerDied","Data":"a431a8479b3b83816b2ae374c619415c393d9ae1771f2b17130475d272e4ac73"} Nov 24 21:43:39 crc kubenswrapper[4915]: I1124 21:43:39.722113 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee896549-5a9e-4159-b7d9-5eb0321366aa","Type":"ContainerStarted","Data":"c5ecd6165b41bac8eb231025c06b9eb6ae10f3057829cf5e807767c3845daa71"} Nov 24 21:43:41 crc kubenswrapper[4915]: I1124 21:43:41.744556 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee896549-5a9e-4159-b7d9-5eb0321366aa","Type":"ContainerStarted","Data":"c4155d6bf5ee23e0d09e319d78c27d6fe369080bbee6687058dc870277a3bd1c"} Nov 24 21:43:41 crc kubenswrapper[4915]: I1124 21:43:41.745075 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:43:41 crc kubenswrapper[4915]: I1124 21:43:41.744899 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="proxy-httpd" containerID="cri-o://c4155d6bf5ee23e0d09e319d78c27d6fe369080bbee6687058dc870277a3bd1c" gracePeriod=30 Nov 24 21:43:41 crc kubenswrapper[4915]: I1124 21:43:41.744665 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" 
containerName="ceilometer-central-agent" containerID="cri-o://7a69382b6cc2be0ad42fc337f95e7db8d0f1b3b9728ed7020c17db4f1c39679f" gracePeriod=30 Nov 24 21:43:41 crc kubenswrapper[4915]: I1124 21:43:41.744921 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="ceilometer-notification-agent" containerID="cri-o://ede936be91c81cbeec8d58555049b90d26a0205cf95c14700174b3487a2ea2ba" gracePeriod=30 Nov 24 21:43:41 crc kubenswrapper[4915]: I1124 21:43:41.744912 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="sg-core" containerID="cri-o://c5ecd6165b41bac8eb231025c06b9eb6ae10f3057829cf5e807767c3845daa71" gracePeriod=30 Nov 24 21:43:41 crc kubenswrapper[4915]: I1124 21:43:41.774146 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=10.900765277 podStartE2EDuration="14.774129195s" podCreationTimestamp="2025-11-24 21:43:27 +0000 UTC" firstStartedPulling="2025-11-24 21:43:36.623996401 +0000 UTC m=+1434.940248574" lastFinishedPulling="2025-11-24 21:43:40.497360319 +0000 UTC m=+1438.813612492" observedRunningTime="2025-11-24 21:43:41.772168541 +0000 UTC m=+1440.088420714" watchObservedRunningTime="2025-11-24 21:43:41.774129195 +0000 UTC m=+1440.090381368" Nov 24 21:43:42 crc kubenswrapper[4915]: I1124 21:43:42.757899 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e61db9f-1ee2-4a9a-88ee-3da3d975514a" containerID="7acd04682579663946c79236b8a9dce5d70edb35ad16629d503e76912c8f9f87" exitCode=0 Nov 24 21:43:42 crc kubenswrapper[4915]: I1124 21:43:42.757953 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"7e61db9f-1ee2-4a9a-88ee-3da3d975514a","Type":"ContainerDied","Data":"7acd04682579663946c79236b8a9dce5d70edb35ad16629d503e76912c8f9f87"} Nov 24 21:43:42 crc kubenswrapper[4915]: I1124 21:43:42.761877 4915 generic.go:334] "Generic (PLEG): container finished" podID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerID="c5ecd6165b41bac8eb231025c06b9eb6ae10f3057829cf5e807767c3845daa71" exitCode=2 Nov 24 21:43:42 crc kubenswrapper[4915]: I1124 21:43:42.762013 4915 generic.go:334] "Generic (PLEG): container finished" podID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerID="ede936be91c81cbeec8d58555049b90d26a0205cf95c14700174b3487a2ea2ba" exitCode=0 Nov 24 21:43:42 crc kubenswrapper[4915]: I1124 21:43:42.761948 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee896549-5a9e-4159-b7d9-5eb0321366aa","Type":"ContainerDied","Data":"c5ecd6165b41bac8eb231025c06b9eb6ae10f3057829cf5e807767c3845daa71"} Nov 24 21:43:42 crc kubenswrapper[4915]: I1124 21:43:42.762052 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee896549-5a9e-4159-b7d9-5eb0321366aa","Type":"ContainerDied","Data":"ede936be91c81cbeec8d58555049b90d26a0205cf95c14700174b3487a2ea2ba"} Nov 24 21:43:42 crc kubenswrapper[4915]: I1124 21:43:42.860594 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:43:42 crc kubenswrapper[4915]: I1124 21:43:42.860897 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1e385544-9a00-45e1-a13d-246a4fb83c1a" containerName="glance-log" containerID="cri-o://f3e8acb1a79d00a6496f0fe12dd9bac40012355de41215adb38f498553c99853" gracePeriod=30 Nov 24 21:43:42 crc kubenswrapper[4915]: I1124 21:43:42.861147 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="1e385544-9a00-45e1-a13d-246a4fb83c1a" containerName="glance-httpd" containerID="cri-o://db52c0172ea04d0a948de02991c171a2413e5b7665489c613c5f8155042a9753" gracePeriod=30 Nov 24 21:43:42 crc kubenswrapper[4915]: I1124 21:43:42.902016 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:43:42 crc kubenswrapper[4915]: I1124 21:43:42.902458 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:43:43 crc kubenswrapper[4915]: E1124 21:43:43.043293 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e385544_9a00_45e1_a13d_246a4fb83c1a.slice/crio-conmon-f3e8acb1a79d00a6496f0fe12dd9bac40012355de41215adb38f498553c99853.scope\": RecentStats: unable to find data in memory cache]" Nov 24 21:43:43 crc kubenswrapper[4915]: I1124 21:43:43.778304 4915 generic.go:334] "Generic (PLEG): container finished" podID="1e385544-9a00-45e1-a13d-246a4fb83c1a" containerID="f3e8acb1a79d00a6496f0fe12dd9bac40012355de41215adb38f498553c99853" exitCode=143 Nov 24 21:43:43 crc kubenswrapper[4915]: I1124 21:43:43.778579 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1e385544-9a00-45e1-a13d-246a4fb83c1a","Type":"ContainerDied","Data":"f3e8acb1a79d00a6496f0fe12dd9bac40012355de41215adb38f498553c99853"} Nov 24 21:43:43 crc kubenswrapper[4915]: I1124 21:43:43.962317 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:43:43 crc kubenswrapper[4915]: I1124 21:43:43.966341 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-htv6k" podUID="91467267-1652-40bb-aa0b-99ede2e49bb4" containerName="registry-server" probeResult="failure" output=< Nov 24 21:43:43 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 21:43:43 crc kubenswrapper[4915]: > Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.114833 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-combined-ca-bundle\") pod \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.114900 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-logs\") pod \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.114977 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnsn7\" (UniqueName: \"kubernetes.io/projected/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-kube-api-access-pnsn7\") pod \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.115023 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-config-data\") pod \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.115087 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-public-tls-certs\") pod \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.115187 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.115377 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-httpd-run\") pod \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.115438 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-scripts\") pod \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\" (UID: \"7e61db9f-1ee2-4a9a-88ee-3da3d975514a\") " Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.116025 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7e61db9f-1ee2-4a9a-88ee-3da3d975514a" (UID: "7e61db9f-1ee2-4a9a-88ee-3da3d975514a"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.116055 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-logs" (OuterVolumeSpecName: "logs") pod "7e61db9f-1ee2-4a9a-88ee-3da3d975514a" (UID: "7e61db9f-1ee2-4a9a-88ee-3da3d975514a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.116564 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.116599 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.125254 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "7e61db9f-1ee2-4a9a-88ee-3da3d975514a" (UID: "7e61db9f-1ee2-4a9a-88ee-3da3d975514a"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.125463 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-kube-api-access-pnsn7" (OuterVolumeSpecName: "kube-api-access-pnsn7") pod "7e61db9f-1ee2-4a9a-88ee-3da3d975514a" (UID: "7e61db9f-1ee2-4a9a-88ee-3da3d975514a"). InnerVolumeSpecName "kube-api-access-pnsn7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.125489 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-scripts" (OuterVolumeSpecName: "scripts") pod "7e61db9f-1ee2-4a9a-88ee-3da3d975514a" (UID: "7e61db9f-1ee2-4a9a-88ee-3da3d975514a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.164926 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e61db9f-1ee2-4a9a-88ee-3da3d975514a" (UID: "7e61db9f-1ee2-4a9a-88ee-3da3d975514a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.193363 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7e61db9f-1ee2-4a9a-88ee-3da3d975514a" (UID: "7e61db9f-1ee2-4a9a-88ee-3da3d975514a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.194967 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-config-data" (OuterVolumeSpecName: "config-data") pod "7e61db9f-1ee2-4a9a-88ee-3da3d975514a" (UID: "7e61db9f-1ee2-4a9a-88ee-3da3d975514a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.219176 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.219240 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.219255 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.219267 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.219281 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnsn7\" (UniqueName: \"kubernetes.io/projected/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-kube-api-access-pnsn7\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.219294 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e61db9f-1ee2-4a9a-88ee-3da3d975514a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.245884 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.321543 4915 reconciler_common.go:293] "Volume detached for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:44 crc kubenswrapper[4915]: E1124 21:43:44.481123 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="758e36b26cc0fb5693068dcda64a28fd6b96b4d2a353c93a74e9837e318e396b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:43:44 crc kubenswrapper[4915]: E1124 21:43:44.482732 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="758e36b26cc0fb5693068dcda64a28fd6b96b4d2a353c93a74e9837e318e396b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:43:44 crc kubenswrapper[4915]: E1124 21:43:44.484229 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="758e36b26cc0fb5693068dcda64a28fd6b96b4d2a353c93a74e9837e318e396b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 21:43:44 crc kubenswrapper[4915]: E1124 21:43:44.484310 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-8f669b545-6bqfj" podUID="a652c587-ebd5-4783-8397-9275b5a3b682" containerName="heat-engine" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.793336 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7e61db9f-1ee2-4a9a-88ee-3da3d975514a","Type":"ContainerDied","Data":"44d266f0d2e1029385396fae3e264f866bd8b540ad4011a2b88ece3f9cadb8e4"} Nov 24 21:43:44 crc 
kubenswrapper[4915]: I1124 21:43:44.793564 4915 scope.go:117] "RemoveContainer" containerID="7acd04682579663946c79236b8a9dce5d70edb35ad16629d503e76912c8f9f87" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.793698 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.824422 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.846594 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.859509 4915 scope.go:117] "RemoveContainer" containerID="a431a8479b3b83816b2ae374c619415c393d9ae1771f2b17130475d272e4ac73" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.861092 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:43:44 crc kubenswrapper[4915]: E1124 21:43:44.861602 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afef51f6-3303-49ea-945d-a5e98d263877" containerName="heat-cfnapi" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.861617 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="afef51f6-3303-49ea-945d-a5e98d263877" containerName="heat-cfnapi" Nov 24 21:43:44 crc kubenswrapper[4915]: E1124 21:43:44.861648 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e61db9f-1ee2-4a9a-88ee-3da3d975514a" containerName="glance-httpd" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.861655 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e61db9f-1ee2-4a9a-88ee-3da3d975514a" containerName="glance-httpd" Nov 24 21:43:44 crc kubenswrapper[4915]: E1124 21:43:44.861677 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e61db9f-1ee2-4a9a-88ee-3da3d975514a" 
containerName="glance-log" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.861683 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e61db9f-1ee2-4a9a-88ee-3da3d975514a" containerName="glance-log" Nov 24 21:43:44 crc kubenswrapper[4915]: E1124 21:43:44.861703 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60efd5e8-d968-4c42-bca3-5890cb8b3ce9" containerName="heat-api" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.861708 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="60efd5e8-d968-4c42-bca3-5890cb8b3ce9" containerName="heat-api" Nov 24 21:43:44 crc kubenswrapper[4915]: E1124 21:43:44.861718 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60efd5e8-d968-4c42-bca3-5890cb8b3ce9" containerName="heat-api" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.861723 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="60efd5e8-d968-4c42-bca3-5890cb8b3ce9" containerName="heat-api" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.861971 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="60efd5e8-d968-4c42-bca3-5890cb8b3ce9" containerName="heat-api" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.861988 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="afef51f6-3303-49ea-945d-a5e98d263877" containerName="heat-cfnapi" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.862002 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e61db9f-1ee2-4a9a-88ee-3da3d975514a" containerName="glance-log" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.862017 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="afef51f6-3303-49ea-945d-a5e98d263877" containerName="heat-cfnapi" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.862027 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e61db9f-1ee2-4a9a-88ee-3da3d975514a" containerName="glance-httpd" Nov 24 21:43:44 crc 
kubenswrapper[4915]: E1124 21:43:44.862233 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afef51f6-3303-49ea-945d-a5e98d263877" containerName="heat-cfnapi" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.862245 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="afef51f6-3303-49ea-945d-a5e98d263877" containerName="heat-cfnapi" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.862460 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="60efd5e8-d968-4c42-bca3-5890cb8b3ce9" containerName="heat-api" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.863276 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.870428 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.870718 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 24 21:43:44 crc kubenswrapper[4915]: I1124 21:43:44.876924 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.034851 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.034965 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.035034 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms7nb\" (UniqueName: \"kubernetes.io/projected/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-kube-api-access-ms7nb\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.035130 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.035201 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.035284 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.035320 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-logs\") pod 
\"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.035464 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.137138 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-scripts\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.137215 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms7nb\" (UniqueName: \"kubernetes.io/projected/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-kube-api-access-ms7nb\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.137282 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.137311 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.137356 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.137376 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-logs\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.137402 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.137490 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.138157 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-logs\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: 
I1124 21:43:45.138354 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.138402 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.142164 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.143399 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-config-data\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.143992 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-scripts\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.158495 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.160600 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms7nb\" (UniqueName: \"kubernetes.io/projected/b0ccc65b-d989-428d-8ec6-ad88e8a03f42-kube-api-access-ms7nb\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.192267 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"b0ccc65b-d989-428d-8ec6-ad88e8a03f42\") " pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.216530 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 24 21:43:45 crc kubenswrapper[4915]: W1124 21:43:45.840444 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0ccc65b_d989_428d_8ec6_ad88e8a03f42.slice/crio-53ca1dee5696df9c34fbe518293fc7e979f4c701a3e7047927217b3bb9557623 WatchSource:0}: Error finding container 53ca1dee5696df9c34fbe518293fc7e979f4c701a3e7047927217b3bb9557623: Status 404 returned error can't find the container with id 53ca1dee5696df9c34fbe518293fc7e979f4c701a3e7047927217b3bb9557623 Nov 24 21:43:45 crc kubenswrapper[4915]: I1124 21:43:45.845975 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.449064 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e61db9f-1ee2-4a9a-88ee-3da3d975514a" path="/var/lib/kubelet/pods/7e61db9f-1ee2-4a9a-88ee-3da3d975514a/volumes" Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.824154 4915 generic.go:334] "Generic (PLEG): container finished" podID="1e385544-9a00-45e1-a13d-246a4fb83c1a" containerID="db52c0172ea04d0a948de02991c171a2413e5b7665489c613c5f8155042a9753" exitCode=0 Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.824486 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1e385544-9a00-45e1-a13d-246a4fb83c1a","Type":"ContainerDied","Data":"db52c0172ea04d0a948de02991c171a2413e5b7665489c613c5f8155042a9753"} Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.824530 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1e385544-9a00-45e1-a13d-246a4fb83c1a","Type":"ContainerDied","Data":"94b7a860d9995963ad6fef87623e96a3bb4b6cab00f8852d3d4507597a7ec927"} Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.824543 4915 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94b7a860d9995963ad6fef87623e96a3bb4b6cab00f8852d3d4507597a7ec927" Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.826667 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0ccc65b-d989-428d-8ec6-ad88e8a03f42","Type":"ContainerStarted","Data":"9ec342a0893509fc409f34bc682c9fdc928f98eaa9d5059429a073ead28837f4"} Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.826722 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0ccc65b-d989-428d-8ec6-ad88e8a03f42","Type":"ContainerStarted","Data":"53ca1dee5696df9c34fbe518293fc7e979f4c701a3e7047927217b3bb9557623"} Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.831518 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.920237 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-scripts\") pod \"1e385544-9a00-45e1-a13d-246a4fb83c1a\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.920335 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e385544-9a00-45e1-a13d-246a4fb83c1a-httpd-run\") pod \"1e385544-9a00-45e1-a13d-246a4fb83c1a\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.920447 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"1e385544-9a00-45e1-a13d-246a4fb83c1a\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " Nov 24 21:43:46 crc 
kubenswrapper[4915]: I1124 21:43:46.920582 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-internal-tls-certs\") pod \"1e385544-9a00-45e1-a13d-246a4fb83c1a\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.920628 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj9pr\" (UniqueName: \"kubernetes.io/projected/1e385544-9a00-45e1-a13d-246a4fb83c1a-kube-api-access-sj9pr\") pod \"1e385544-9a00-45e1-a13d-246a4fb83c1a\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.920701 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-config-data\") pod \"1e385544-9a00-45e1-a13d-246a4fb83c1a\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.920792 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e385544-9a00-45e1-a13d-246a4fb83c1a-logs\") pod \"1e385544-9a00-45e1-a13d-246a4fb83c1a\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.920833 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-combined-ca-bundle\") pod \"1e385544-9a00-45e1-a13d-246a4fb83c1a\" (UID: \"1e385544-9a00-45e1-a13d-246a4fb83c1a\") " Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.926301 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e385544-9a00-45e1-a13d-246a4fb83c1a-logs" (OuterVolumeSpecName: "logs") pod 
"1e385544-9a00-45e1-a13d-246a4fb83c1a" (UID: "1e385544-9a00-45e1-a13d-246a4fb83c1a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.926538 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e385544-9a00-45e1-a13d-246a4fb83c1a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1e385544-9a00-45e1-a13d-246a4fb83c1a" (UID: "1e385544-9a00-45e1-a13d-246a4fb83c1a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.952039 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "1e385544-9a00-45e1-a13d-246a4fb83c1a" (UID: "1e385544-9a00-45e1-a13d-246a4fb83c1a"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.952234 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e385544-9a00-45e1-a13d-246a4fb83c1a-kube-api-access-sj9pr" (OuterVolumeSpecName: "kube-api-access-sj9pr") pod "1e385544-9a00-45e1-a13d-246a4fb83c1a" (UID: "1e385544-9a00-45e1-a13d-246a4fb83c1a"). InnerVolumeSpecName "kube-api-access-sj9pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:46 crc kubenswrapper[4915]: I1124 21:43:46.952317 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-scripts" (OuterVolumeSpecName: "scripts") pod "1e385544-9a00-45e1-a13d-246a4fb83c1a" (UID: "1e385544-9a00-45e1-a13d-246a4fb83c1a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.040905 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e385544-9a00-45e1-a13d-246a4fb83c1a" (UID: "1e385544-9a00-45e1-a13d-246a4fb83c1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.043411 4915 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e385544-9a00-45e1-a13d-246a4fb83c1a-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.043554 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.043670 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj9pr\" (UniqueName: \"kubernetes.io/projected/1e385544-9a00-45e1-a13d-246a4fb83c1a-kube-api-access-sj9pr\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.043750 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e385544-9a00-45e1-a13d-246a4fb83c1a-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.044183 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.044278 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.091707 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.114360 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1e385544-9a00-45e1-a13d-246a4fb83c1a" (UID: "1e385544-9a00-45e1-a13d-246a4fb83c1a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.116855 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-config-data" (OuterVolumeSpecName: "config-data") pod "1e385544-9a00-45e1-a13d-246a4fb83c1a" (UID: "1e385544-9a00-45e1-a13d-246a4fb83c1a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.146718 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.146760 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e385544-9a00-45e1-a13d-246a4fb83c1a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.146786 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.847830 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b0ccc65b-d989-428d-8ec6-ad88e8a03f42","Type":"ContainerStarted","Data":"6c0b645121584cd4cac7f95c1a05138cc45a247ad2863e1c252a9c1c84378538"} Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.850232 4915 generic.go:334] "Generic (PLEG): container finished" podID="a652c587-ebd5-4783-8397-9275b5a3b682" containerID="758e36b26cc0fb5693068dcda64a28fd6b96b4d2a353c93a74e9837e318e396b" exitCode=0 Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.850329 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.856963 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8f669b545-6bqfj" event={"ID":"a652c587-ebd5-4783-8397-9275b5a3b682","Type":"ContainerDied","Data":"758e36b26cc0fb5693068dcda64a28fd6b96b4d2a353c93a74e9837e318e396b"} Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.857029 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8f669b545-6bqfj" event={"ID":"a652c587-ebd5-4783-8397-9275b5a3b682","Type":"ContainerDied","Data":"9071d048339ffb1d7b8fb8fcc71ff3c2970d8956a73b7f831a09eb0de54314e4"} Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.857041 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9071d048339ffb1d7b8fb8fcc71ff3c2970d8956a73b7f831a09eb0de54314e4" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.877301 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.8772831930000002 podStartE2EDuration="3.877283193s" podCreationTimestamp="2025-11-24 21:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:43:47.870671635 +0000 UTC m=+1446.186923808" watchObservedRunningTime="2025-11-24 21:43:47.877283193 +0000 UTC m=+1446.193535356" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.896551 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.917551 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.931530 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.965882 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-config-data\") pod \"a652c587-ebd5-4783-8397-9275b5a3b682\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.966112 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbkqt\" (UniqueName: \"kubernetes.io/projected/a652c587-ebd5-4783-8397-9275b5a3b682-kube-api-access-fbkqt\") pod \"a652c587-ebd5-4783-8397-9275b5a3b682\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.966327 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-config-data-custom\") pod \"a652c587-ebd5-4783-8397-9275b5a3b682\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.966370 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-combined-ca-bundle\") pod \"a652c587-ebd5-4783-8397-9275b5a3b682\" (UID: \"a652c587-ebd5-4783-8397-9275b5a3b682\") " Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.973923 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 
21:43:47 crc kubenswrapper[4915]: E1124 21:43:47.974408 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a652c587-ebd5-4783-8397-9275b5a3b682" containerName="heat-engine" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.974420 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a652c587-ebd5-4783-8397-9275b5a3b682" containerName="heat-engine" Nov 24 21:43:47 crc kubenswrapper[4915]: E1124 21:43:47.974434 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e385544-9a00-45e1-a13d-246a4fb83c1a" containerName="glance-log" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.974440 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e385544-9a00-45e1-a13d-246a4fb83c1a" containerName="glance-log" Nov 24 21:43:47 crc kubenswrapper[4915]: E1124 21:43:47.974458 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e385544-9a00-45e1-a13d-246a4fb83c1a" containerName="glance-httpd" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.974464 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e385544-9a00-45e1-a13d-246a4fb83c1a" containerName="glance-httpd" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.974713 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e385544-9a00-45e1-a13d-246a4fb83c1a" containerName="glance-httpd" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.974734 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e385544-9a00-45e1-a13d-246a4fb83c1a" containerName="glance-log" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.974751 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a652c587-ebd5-4783-8397-9275b5a3b682" containerName="heat-engine" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.975080 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a652c587-ebd5-4783-8397-9275b5a3b682-kube-api-access-fbkqt" 
(OuterVolumeSpecName: "kube-api-access-fbkqt") pod "a652c587-ebd5-4783-8397-9275b5a3b682" (UID: "a652c587-ebd5-4783-8397-9275b5a3b682"). InnerVolumeSpecName "kube-api-access-fbkqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.975956 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.977516 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a652c587-ebd5-4783-8397-9275b5a3b682" (UID: "a652c587-ebd5-4783-8397-9275b5a3b682"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.979702 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 24 21:43:47 crc kubenswrapper[4915]: I1124 21:43:47.981637 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.021255 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.032379 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a652c587-ebd5-4783-8397-9275b5a3b682" (UID: "a652c587-ebd5-4783-8397-9275b5a3b682"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.068905 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbkqt\" (UniqueName: \"kubernetes.io/projected/a652c587-ebd5-4783-8397-9275b5a3b682-kube-api-access-fbkqt\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.068932 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.068940 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.113628 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-config-data" (OuterVolumeSpecName: "config-data") pod "a652c587-ebd5-4783-8397-9275b5a3b682" (UID: "a652c587-ebd5-4783-8397-9275b5a3b682"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.171417 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b5763a8-4eea-4c3b-bddc-a9d55da42631-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.171489 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9p8q\" (UniqueName: \"kubernetes.io/projected/5b5763a8-4eea-4c3b-bddc-a9d55da42631-kube-api-access-l9p8q\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.171535 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b5763a8-4eea-4c3b-bddc-a9d55da42631-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.171571 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5763a8-4eea-4c3b-bddc-a9d55da42631-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.171715 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5763a8-4eea-4c3b-bddc-a9d55da42631-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.171823 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b5763a8-4eea-4c3b-bddc-a9d55da42631-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.172016 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b5763a8-4eea-4c3b-bddc-a9d55da42631-logs\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.172135 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.172299 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a652c587-ebd5-4783-8397-9275b5a3b682-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.274524 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5763a8-4eea-4c3b-bddc-a9d55da42631-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.274615 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5763a8-4eea-4c3b-bddc-a9d55da42631-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.274657 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b5763a8-4eea-4c3b-bddc-a9d55da42631-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.274713 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b5763a8-4eea-4c3b-bddc-a9d55da42631-logs\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.274748 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.274844 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b5763a8-4eea-4c3b-bddc-a9d55da42631-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.274881 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-l9p8q\" (UniqueName: \"kubernetes.io/projected/5b5763a8-4eea-4c3b-bddc-a9d55da42631-kube-api-access-l9p8q\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.274907 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b5763a8-4eea-4c3b-bddc-a9d55da42631-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.275898 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5b5763a8-4eea-4c3b-bddc-a9d55da42631-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.276030 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b5763a8-4eea-4c3b-bddc-a9d55da42631-logs\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.276554 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.280522 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5b5763a8-4eea-4c3b-bddc-a9d55da42631-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.282488 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b5763a8-4eea-4c3b-bddc-a9d55da42631-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.287534 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b5763a8-4eea-4c3b-bddc-a9d55da42631-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.288988 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b5763a8-4eea-4c3b-bddc-a9d55da42631-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.297853 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9p8q\" (UniqueName: \"kubernetes.io/projected/5b5763a8-4eea-4c3b-bddc-a9d55da42631-kube-api-access-l9p8q\") pod \"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.329480 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"5b5763a8-4eea-4c3b-bddc-a9d55da42631\") " pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.371706 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.448312 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e385544-9a00-45e1-a13d-246a4fb83c1a" path="/var/lib/kubelet/pods/1e385544-9a00-45e1-a13d-246a4fb83c1a/volumes" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.872794 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8f669b545-6bqfj" Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.910938 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-8f669b545-6bqfj"] Nov 24 21:43:48 crc kubenswrapper[4915]: I1124 21:43:48.925660 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-8f669b545-6bqfj"] Nov 24 21:43:49 crc kubenswrapper[4915]: I1124 21:43:49.067194 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 24 21:43:49 crc kubenswrapper[4915]: I1124 21:43:49.908443 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b5763a8-4eea-4c3b-bddc-a9d55da42631","Type":"ContainerStarted","Data":"578b47888a7df38b1db01b0dcbf8bd1733c1a1d8e97aa406becf7389f51ae3ca"} Nov 24 21:43:49 crc kubenswrapper[4915]: I1124 21:43:49.908965 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b5763a8-4eea-4c3b-bddc-a9d55da42631","Type":"ContainerStarted","Data":"33854cb11a3f9ab33093bf26f9c1907156432bd9a844cda7702e9a7a17c7ac14"} Nov 24 21:43:50 crc kubenswrapper[4915]: I1124 21:43:50.441270 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="a652c587-ebd5-4783-8397-9275b5a3b682" path="/var/lib/kubelet/pods/a652c587-ebd5-4783-8397-9275b5a3b682/volumes" Nov 24 21:43:50 crc kubenswrapper[4915]: I1124 21:43:50.920495 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5b5763a8-4eea-4c3b-bddc-a9d55da42631","Type":"ContainerStarted","Data":"b6688428f497da66f2c994439d5c579183f88607d919f10163750374c6e95ca5"} Nov 24 21:43:50 crc kubenswrapper[4915]: I1124 21:43:50.943902 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.943877872 podStartE2EDuration="3.943877872s" podCreationTimestamp="2025-11-24 21:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:43:50.941182249 +0000 UTC m=+1449.257434452" watchObservedRunningTime="2025-11-24 21:43:50.943877872 +0000 UTC m=+1449.260130045" Nov 24 21:43:53 crc kubenswrapper[4915]: I1124 21:43:53.953496 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-htv6k" podUID="91467267-1652-40bb-aa0b-99ede2e49bb4" containerName="registry-server" probeResult="failure" output=< Nov 24 21:43:53 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 21:43:53 crc kubenswrapper[4915]: > Nov 24 21:43:55 crc kubenswrapper[4915]: I1124 21:43:55.217032 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 21:43:55 crc kubenswrapper[4915]: I1124 21:43:55.217379 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 24 21:43:55 crc kubenswrapper[4915]: I1124 21:43:55.256595 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 
21:43:55 crc kubenswrapper[4915]: I1124 21:43:55.275399 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 24 21:43:55 crc kubenswrapper[4915]: I1124 21:43:55.987517 4915 generic.go:334] "Generic (PLEG): container finished" podID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerID="7a69382b6cc2be0ad42fc337f95e7db8d0f1b3b9728ed7020c17db4f1c39679f" exitCode=0 Nov 24 21:43:55 crc kubenswrapper[4915]: I1124 21:43:55.987602 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee896549-5a9e-4159-b7d9-5eb0321366aa","Type":"ContainerDied","Data":"7a69382b6cc2be0ad42fc337f95e7db8d0f1b3b9728ed7020c17db4f1c39679f"} Nov 24 21:43:55 crc kubenswrapper[4915]: I1124 21:43:55.988322 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 21:43:55 crc kubenswrapper[4915]: I1124 21:43:55.988357 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 24 21:43:57 crc kubenswrapper[4915]: I1124 21:43:57.912278 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 24 21:43:58 crc kubenswrapper[4915]: I1124 21:43:58.373910 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 21:43:58 crc kubenswrapper[4915]: I1124 21:43:58.373953 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 24 21:43:58 crc kubenswrapper[4915]: I1124 21:43:58.407837 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 21:43:58 crc kubenswrapper[4915]: I1124 21:43:58.422441 4915 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 24 21:43:58 crc kubenswrapper[4915]: I1124 21:43:58.743771 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 21:43:58 crc kubenswrapper[4915]: I1124 21:43:58.744216 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:43:58 crc kubenswrapper[4915]: I1124 21:43:58.748801 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 24 21:43:59 crc kubenswrapper[4915]: I1124 21:43:59.042517 4915 generic.go:334] "Generic (PLEG): container finished" podID="a25cfd15-cca3-48ae-b45c-81cc7a690f31" containerID="6d1276a333bb1ea524f81c8c303c52a8e90f0e61337e70a168ecd6200bae1b69" exitCode=0 Nov 24 21:43:59 crc kubenswrapper[4915]: I1124 21:43:59.042603 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9jg68" event={"ID":"a25cfd15-cca3-48ae-b45c-81cc7a690f31","Type":"ContainerDied","Data":"6d1276a333bb1ea524f81c8c303c52a8e90f0e61337e70a168ecd6200bae1b69"} Nov 24 21:43:59 crc kubenswrapper[4915]: I1124 21:43:59.043074 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 21:43:59 crc kubenswrapper[4915]: I1124 21:43:59.043095 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 24 21:44:00 crc kubenswrapper[4915]: I1124 21:44:00.536134 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:44:00 crc kubenswrapper[4915]: I1124 21:44:00.669085 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-config-data\") pod \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " Nov 24 21:44:00 crc kubenswrapper[4915]: I1124 21:44:00.669238 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-combined-ca-bundle\") pod \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " Nov 24 21:44:00 crc kubenswrapper[4915]: I1124 21:44:00.669318 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bsmv\" (UniqueName: \"kubernetes.io/projected/a25cfd15-cca3-48ae-b45c-81cc7a690f31-kube-api-access-9bsmv\") pod \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " Nov 24 21:44:00 crc kubenswrapper[4915]: I1124 21:44:00.669441 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-scripts\") pod \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\" (UID: \"a25cfd15-cca3-48ae-b45c-81cc7a690f31\") " Nov 24 21:44:00 crc kubenswrapper[4915]: I1124 21:44:00.678452 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-scripts" (OuterVolumeSpecName: "scripts") pod "a25cfd15-cca3-48ae-b45c-81cc7a690f31" (UID: "a25cfd15-cca3-48ae-b45c-81cc7a690f31"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:00 crc kubenswrapper[4915]: I1124 21:44:00.696022 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25cfd15-cca3-48ae-b45c-81cc7a690f31-kube-api-access-9bsmv" (OuterVolumeSpecName: "kube-api-access-9bsmv") pod "a25cfd15-cca3-48ae-b45c-81cc7a690f31" (UID: "a25cfd15-cca3-48ae-b45c-81cc7a690f31"). InnerVolumeSpecName "kube-api-access-9bsmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:00 crc kubenswrapper[4915]: I1124 21:44:00.717633 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-config-data" (OuterVolumeSpecName: "config-data") pod "a25cfd15-cca3-48ae-b45c-81cc7a690f31" (UID: "a25cfd15-cca3-48ae-b45c-81cc7a690f31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:00 crc kubenswrapper[4915]: I1124 21:44:00.721960 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a25cfd15-cca3-48ae-b45c-81cc7a690f31" (UID: "a25cfd15-cca3-48ae-b45c-81cc7a690f31"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:00 crc kubenswrapper[4915]: I1124 21:44:00.772210 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:00 crc kubenswrapper[4915]: I1124 21:44:00.772251 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bsmv\" (UniqueName: \"kubernetes.io/projected/a25cfd15-cca3-48ae-b45c-81cc7a690f31-kube-api-access-9bsmv\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:00 crc kubenswrapper[4915]: I1124 21:44:00.772264 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:00 crc kubenswrapper[4915]: I1124 21:44:00.772273 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a25cfd15-cca3-48ae-b45c-81cc7a690f31-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.064747 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9jg68" event={"ID":"a25cfd15-cca3-48ae-b45c-81cc7a690f31","Type":"ContainerDied","Data":"d3573873f1183ab93e585f8a7efeedc05cac9f060fe73520133da9a01da570df"} Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.064820 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3573873f1183ab93e585f8a7efeedc05cac9f060fe73520133da9a01da570df" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.064820 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9jg68" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.168947 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:44:01 crc kubenswrapper[4915]: E1124 21:44:01.169503 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25cfd15-cca3-48ae-b45c-81cc7a690f31" containerName="nova-cell0-conductor-db-sync" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.169529 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25cfd15-cca3-48ae-b45c-81cc7a690f31" containerName="nova-cell0-conductor-db-sync" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.175050 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25cfd15-cca3-48ae-b45c-81cc7a690f31" containerName="nova-cell0-conductor-db-sync" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.176229 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.181630 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.182680 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.182755 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-9mtxz" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.285585 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23643233-f874-4589-9494-dcdada59274e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"23643233-f874-4589-9494-dcdada59274e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:44:01 crc kubenswrapper[4915]: 
I1124 21:44:01.285889 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23643233-f874-4589-9494-dcdada59274e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"23643233-f874-4589-9494-dcdada59274e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.286127 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkckr\" (UniqueName: \"kubernetes.io/projected/23643233-f874-4589-9494-dcdada59274e-kube-api-access-gkckr\") pod \"nova-cell0-conductor-0\" (UID: \"23643233-f874-4589-9494-dcdada59274e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.387987 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23643233-f874-4589-9494-dcdada59274e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"23643233-f874-4589-9494-dcdada59274e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.388073 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23643233-f874-4589-9494-dcdada59274e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"23643233-f874-4589-9494-dcdada59274e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.388191 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkckr\" (UniqueName: \"kubernetes.io/projected/23643233-f874-4589-9494-dcdada59274e-kube-api-access-gkckr\") pod \"nova-cell0-conductor-0\" (UID: \"23643233-f874-4589-9494-dcdada59274e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.393608 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23643233-f874-4589-9494-dcdada59274e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"23643233-f874-4589-9494-dcdada59274e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.393620 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23643233-f874-4589-9494-dcdada59274e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"23643233-f874-4589-9494-dcdada59274e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.408264 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkckr\" (UniqueName: \"kubernetes.io/projected/23643233-f874-4589-9494-dcdada59274e-kube-api-access-gkckr\") pod \"nova-cell0-conductor-0\" (UID: \"23643233-f874-4589-9494-dcdada59274e\") " pod="openstack/nova-cell0-conductor-0" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.496446 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.557376 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.557482 4915 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 24 21:44:01 crc kubenswrapper[4915]: I1124 21:44:01.577710 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 24 21:44:02 crc kubenswrapper[4915]: I1124 21:44:02.063764 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 24 21:44:02 crc kubenswrapper[4915]: I1124 21:44:02.103013 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"23643233-f874-4589-9494-dcdada59274e","Type":"ContainerStarted","Data":"07e7d91c0f24ae001e68fce6ef4deb2b572e8e4e043676268451398060720d10"} Nov 24 21:44:03 crc kubenswrapper[4915]: I1124 21:44:03.123009 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"23643233-f874-4589-9494-dcdada59274e","Type":"ContainerStarted","Data":"208527ba49a60294d10ef5bf3331218d1362956864455ef0eb040afb9e75c9c6"} Nov 24 21:44:03 crc kubenswrapper[4915]: I1124 21:44:03.123661 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 24 21:44:03 crc kubenswrapper[4915]: I1124 21:44:03.146889 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.146867121 podStartE2EDuration="2.146867121s" podCreationTimestamp="2025-11-24 21:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:44:03.141982909 +0000 UTC m=+1461.458235082" 
watchObservedRunningTime="2025-11-24 21:44:03.146867121 +0000 UTC m=+1461.463119304" Nov 24 21:44:03 crc kubenswrapper[4915]: I1124 21:44:03.954227 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-htv6k" podUID="91467267-1652-40bb-aa0b-99ede2e49bb4" containerName="registry-server" probeResult="failure" output=< Nov 24 21:44:03 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 21:44:03 crc kubenswrapper[4915]: > Nov 24 21:44:11 crc kubenswrapper[4915]: I1124 21:44:11.540818 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.226575 4915 generic.go:334] "Generic (PLEG): container finished" podID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerID="c4155d6bf5ee23e0d09e319d78c27d6fe369080bbee6687058dc870277a3bd1c" exitCode=137 Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.226647 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee896549-5a9e-4159-b7d9-5eb0321366aa","Type":"ContainerDied","Data":"c4155d6bf5ee23e0d09e319d78c27d6fe369080bbee6687058dc870277a3bd1c"} Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.227350 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee896549-5a9e-4159-b7d9-5eb0321366aa","Type":"ContainerDied","Data":"fca6a00c5a4e44ff9eaf608d5af02f35111dc78894f7e41b9cb344e7d42492bb"} Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.227461 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca6a00c5a4e44ff9eaf608d5af02f35111dc78894f7e41b9cb344e7d42492bb" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.338473 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.375242 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-jshft"] Nov 24 21:44:12 crc kubenswrapper[4915]: E1124 21:44:12.375939 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="sg-core" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.375959 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="sg-core" Nov 24 21:44:12 crc kubenswrapper[4915]: E1124 21:44:12.375991 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="ceilometer-central-agent" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.375998 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="ceilometer-central-agent" Nov 24 21:44:12 crc kubenswrapper[4915]: E1124 21:44:12.376021 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="proxy-httpd" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.376026 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="proxy-httpd" Nov 24 21:44:12 crc kubenswrapper[4915]: E1124 21:44:12.376048 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="ceilometer-notification-agent" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.376054 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="ceilometer-notification-agent" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.376272 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" 
containerName="ceilometer-notification-agent" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.376288 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="proxy-httpd" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.376310 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="sg-core" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.376318 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" containerName="ceilometer-central-agent" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.377123 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.388935 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.389256 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.414461 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jshft"] Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.458251 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-config-data\") pod \"ee896549-5a9e-4159-b7d9-5eb0321366aa\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.459211 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee896549-5a9e-4159-b7d9-5eb0321366aa-log-httpd\") pod \"ee896549-5a9e-4159-b7d9-5eb0321366aa\" (UID: 
\"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.459283 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-scripts\") pod \"ee896549-5a9e-4159-b7d9-5eb0321366aa\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.459499 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m28cw\" (UniqueName: \"kubernetes.io/projected/ee896549-5a9e-4159-b7d9-5eb0321366aa-kube-api-access-m28cw\") pod \"ee896549-5a9e-4159-b7d9-5eb0321366aa\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.459542 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-sg-core-conf-yaml\") pod \"ee896549-5a9e-4159-b7d9-5eb0321366aa\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.459596 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee896549-5a9e-4159-b7d9-5eb0321366aa-run-httpd\") pod \"ee896549-5a9e-4159-b7d9-5eb0321366aa\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.459627 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-combined-ca-bundle\") pod \"ee896549-5a9e-4159-b7d9-5eb0321366aa\" (UID: \"ee896549-5a9e-4159-b7d9-5eb0321366aa\") " Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.462192 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ee896549-5a9e-4159-b7d9-5eb0321366aa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ee896549-5a9e-4159-b7d9-5eb0321366aa" (UID: "ee896549-5a9e-4159-b7d9-5eb0321366aa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.462293 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jshft\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.462376 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-config-data\") pod \"nova-cell0-cell-mapping-jshft\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.462716 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-scripts\") pod \"nova-cell0-cell-mapping-jshft\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.462854 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcdks\" (UniqueName: \"kubernetes.io/projected/fb3b004f-82a5-46ab-aff4-223567ddd793-kube-api-access-zcdks\") pod \"nova-cell0-cell-mapping-jshft\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.463075 4915 
reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee896549-5a9e-4159-b7d9-5eb0321366aa-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.477248 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee896549-5a9e-4159-b7d9-5eb0321366aa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ee896549-5a9e-4159-b7d9-5eb0321366aa" (UID: "ee896549-5a9e-4159-b7d9-5eb0321366aa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.486884 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee896549-5a9e-4159-b7d9-5eb0321366aa-kube-api-access-m28cw" (OuterVolumeSpecName: "kube-api-access-m28cw") pod "ee896549-5a9e-4159-b7d9-5eb0321366aa" (UID: "ee896549-5a9e-4159-b7d9-5eb0321366aa"). InnerVolumeSpecName "kube-api-access-m28cw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.520193 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-scripts" (OuterVolumeSpecName: "scripts") pod "ee896549-5a9e-4159-b7d9-5eb0321366aa" (UID: "ee896549-5a9e-4159-b7d9-5eb0321366aa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.564644 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcdks\" (UniqueName: \"kubernetes.io/projected/fb3b004f-82a5-46ab-aff4-223567ddd793-kube-api-access-zcdks\") pod \"nova-cell0-cell-mapping-jshft\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.564794 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jshft\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.564823 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-config-data\") pod \"nova-cell0-cell-mapping-jshft\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.564903 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-scripts\") pod \"nova-cell0-cell-mapping-jshft\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.564968 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m28cw\" (UniqueName: \"kubernetes.io/projected/ee896549-5a9e-4159-b7d9-5eb0321366aa-kube-api-access-m28cw\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.564979 4915 reconciler_common.go:293] "Volume 
detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee896549-5a9e-4159-b7d9-5eb0321366aa-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.564987 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.591846 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-scripts\") pod \"nova-cell0-cell-mapping-jshft\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.596011 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jshft\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.597273 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.606697 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.612813 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.618363 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.641433 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-config-data\") pod \"nova-cell0-cell-mapping-jshft\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.665356 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcdks\" (UniqueName: \"kubernetes.io/projected/fb3b004f-82a5-46ab-aff4-223567ddd793-kube-api-access-zcdks\") pod \"nova-cell0-cell-mapping-jshft\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.666901 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tdqj\" (UniqueName: \"kubernetes.io/projected/c3012cd0-1234-4d16-ae47-7eb87738f54d-kube-api-access-6tdqj\") pod \"nova-api-0\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " pod="openstack/nova-api-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.667127 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3012cd0-1234-4d16-ae47-7eb87738f54d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " pod="openstack/nova-api-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.667493 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3012cd0-1234-4d16-ae47-7eb87738f54d-config-data\") pod \"nova-api-0\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " pod="openstack/nova-api-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.667678 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3012cd0-1234-4d16-ae47-7eb87738f54d-logs\") pod \"nova-api-0\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " pod="openstack/nova-api-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.707898 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.710929 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.717336 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.718956 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ee896549-5a9e-4159-b7d9-5eb0321366aa" (UID: "ee896549-5a9e-4159-b7d9-5eb0321366aa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.726402 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.731688 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.757019 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.760534 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.765865 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.769821 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djkn5\" (UniqueName: \"kubernetes.io/projected/cf56610a-46ba-4937-b001-f3054a401f2f-kube-api-access-djkn5\") pod \"nova-scheduler-0\" (UID: \"cf56610a-46ba-4937-b001-f3054a401f2f\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.769896 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3012cd0-1234-4d16-ae47-7eb87738f54d-config-data\") pod \"nova-api-0\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " pod="openstack/nova-api-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.769947 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf56610a-46ba-4937-b001-f3054a401f2f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cf56610a-46ba-4937-b001-f3054a401f2f\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.770015 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c3012cd0-1234-4d16-ae47-7eb87738f54d-logs\") pod \"nova-api-0\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " pod="openstack/nova-api-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.770132 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlkdc\" (UniqueName: \"kubernetes.io/projected/dbf66cbe-0c30-400e-a367-f176ddf6f94c-kube-api-access-rlkdc\") pod \"nova-metadata-0\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " pod="openstack/nova-metadata-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.770162 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbf66cbe-0c30-400e-a367-f176ddf6f94c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " pod="openstack/nova-metadata-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.770189 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tdqj\" (UniqueName: \"kubernetes.io/projected/c3012cd0-1234-4d16-ae47-7eb87738f54d-kube-api-access-6tdqj\") pod \"nova-api-0\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " pod="openstack/nova-api-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.770218 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3012cd0-1234-4d16-ae47-7eb87738f54d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " pod="openstack/nova-api-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.770253 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf66cbe-0c30-400e-a367-f176ddf6f94c-config-data\") pod \"nova-metadata-0\" (UID: 
\"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " pod="openstack/nova-metadata-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.770327 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf56610a-46ba-4937-b001-f3054a401f2f-config-data\") pod \"nova-scheduler-0\" (UID: \"cf56610a-46ba-4937-b001-f3054a401f2f\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.770492 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbf66cbe-0c30-400e-a367-f176ddf6f94c-logs\") pod \"nova-metadata-0\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " pod="openstack/nova-metadata-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.770587 4915 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.772383 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3012cd0-1234-4d16-ae47-7eb87738f54d-logs\") pod \"nova-api-0\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " pod="openstack/nova-api-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.795540 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3012cd0-1234-4d16-ae47-7eb87738f54d-config-data\") pod \"nova-api-0\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " pod="openstack/nova-api-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.797592 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3012cd0-1234-4d16-ae47-7eb87738f54d-combined-ca-bundle\") 
pod \"nova-api-0\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " pod="openstack/nova-api-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.809809 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.820836 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tdqj\" (UniqueName: \"kubernetes.io/projected/c3012cd0-1234-4d16-ae47-7eb87738f54d-kube-api-access-6tdqj\") pod \"nova-api-0\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " pod="openstack/nova-api-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.833468 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.835329 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.838762 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.848043 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.876147 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbf66cbe-0c30-400e-a367-f176ddf6f94c-logs\") pod \"nova-metadata-0\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " pod="openstack/nova-metadata-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.876204 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djkn5\" (UniqueName: \"kubernetes.io/projected/cf56610a-46ba-4937-b001-f3054a401f2f-kube-api-access-djkn5\") pod \"nova-scheduler-0\" (UID: \"cf56610a-46ba-4937-b001-f3054a401f2f\") " 
pod="openstack/nova-scheduler-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.876241 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf56610a-46ba-4937-b001-f3054a401f2f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cf56610a-46ba-4937-b001-f3054a401f2f\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.876318 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlkdc\" (UniqueName: \"kubernetes.io/projected/dbf66cbe-0c30-400e-a367-f176ddf6f94c-kube-api-access-rlkdc\") pod \"nova-metadata-0\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " pod="openstack/nova-metadata-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.876341 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbf66cbe-0c30-400e-a367-f176ddf6f94c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " pod="openstack/nova-metadata-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.876368 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf66cbe-0c30-400e-a367-f176ddf6f94c-config-data\") pod \"nova-metadata-0\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " pod="openstack/nova-metadata-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.876405 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf56610a-46ba-4937-b001-f3054a401f2f-config-data\") pod \"nova-scheduler-0\" (UID: \"cf56610a-46ba-4937-b001-f3054a401f2f\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.878352 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbf66cbe-0c30-400e-a367-f176ddf6f94c-logs\") pod \"nova-metadata-0\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " pod="openstack/nova-metadata-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.885526 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-m2d7d"] Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.887265 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.887299 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf56610a-46ba-4937-b001-f3054a401f2f-config-data\") pod \"nova-scheduler-0\" (UID: \"cf56610a-46ba-4937-b001-f3054a401f2f\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.890235 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf66cbe-0c30-400e-a367-f176ddf6f94c-config-data\") pod \"nova-metadata-0\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " pod="openstack/nova-metadata-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.890496 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf56610a-46ba-4937-b001-f3054a401f2f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cf56610a-46ba-4937-b001-f3054a401f2f\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.894374 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbf66cbe-0c30-400e-a367-f176ddf6f94c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " pod="openstack/nova-metadata-0" Nov 24 21:44:12 crc 
kubenswrapper[4915]: I1124 21:44:12.895819 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-m2d7d"] Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.903627 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djkn5\" (UniqueName: \"kubernetes.io/projected/cf56610a-46ba-4937-b001-f3054a401f2f-kube-api-access-djkn5\") pod \"nova-scheduler-0\" (UID: \"cf56610a-46ba-4937-b001-f3054a401f2f\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.914505 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlkdc\" (UniqueName: \"kubernetes.io/projected/dbf66cbe-0c30-400e-a367-f176ddf6f94c-kube-api-access-rlkdc\") pod \"nova-metadata-0\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " pod="openstack/nova-metadata-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.968756 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.980397 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.985955 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkfrh\" (UniqueName: \"kubernetes.io/projected/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-kube-api-access-wkfrh\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.986072 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.986255 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:12 crc kubenswrapper[4915]: I1124 21:44:12.998962 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-config-data" (OuterVolumeSpecName: "config-data") pod "ee896549-5a9e-4159-b7d9-5eb0321366aa" (UID: "ee896549-5a9e-4159-b7d9-5eb0321366aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.028567 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee896549-5a9e-4159-b7d9-5eb0321366aa" (UID: "ee896549-5a9e-4159-b7d9-5eb0321366aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.083527 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.088768 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-config\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.088966 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.089072 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.089159 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.089244 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqprf\" (UniqueName: \"kubernetes.io/projected/8641d657-17ff-4ab2-a915-d7b945a8e7bf-kube-api-access-rqprf\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: 
\"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.089363 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.089485 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkfrh\" (UniqueName: \"kubernetes.io/projected/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-kube-api-access-wkfrh\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.089560 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.089666 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.094584 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.094694 4915 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee896549-5a9e-4159-b7d9-5eb0321366aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.101197 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.119835 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.125251 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkfrh\" (UniqueName: \"kubernetes.io/projected/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-kube-api-access-wkfrh\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.132183 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.138458 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.209066 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-config\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: 
\"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.210495 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.210572 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.210594 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqprf\" (UniqueName: \"kubernetes.io/projected/8641d657-17ff-4ab2-a915-d7b945a8e7bf-kube-api-access-rqprf\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.211399 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.211454 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " 
pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.241746 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.243558 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.244024 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-config\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.244712 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.245505 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.266447 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqprf\" (UniqueName: \"kubernetes.io/projected/8641d657-17ff-4ab2-a915-d7b945a8e7bf-kube-api-access-rqprf\") pod \"dnsmasq-dns-568d7fd7cf-m2d7d\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.270477 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.330169 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.377886 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.429640 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.447851 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.476855 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.485560 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.492203 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.492396 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.495215 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.554391 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-config-data\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.554447 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-scripts\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.554663 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6dzx\" (UniqueName: \"kubernetes.io/projected/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-kube-api-access-n6dzx\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.554685 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-log-httpd\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " 
pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.554712 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.554760 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.554793 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-run-httpd\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.585192 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jshft"] Nov 24 21:44:13 crc kubenswrapper[4915]: W1124 21:44:13.632698 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb3b004f_82a5_46ab_aff4_223567ddd793.slice/crio-9d5d59933769e6e1672f812f38d95d36b870d88354b7c9f9cf7def2d7379960e WatchSource:0}: Error finding container 9d5d59933769e6e1672f812f38d95d36b870d88354b7c9f9cf7def2d7379960e: Status 404 returned error can't find the container with id 9d5d59933769e6e1672f812f38d95d36b870d88354b7c9f9cf7def2d7379960e Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.656983 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-n6dzx\" (UniqueName: \"kubernetes.io/projected/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-kube-api-access-n6dzx\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.657028 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-log-httpd\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.657065 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.657098 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.657113 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-run-httpd\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.657185 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-config-data\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" 
Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.657223 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-scripts\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.657530 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-log-httpd\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.659135 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-run-httpd\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.663401 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.663461 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-scripts\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.664804 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.667550 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-config-data\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.688888 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6dzx\" (UniqueName: \"kubernetes.io/projected/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-kube-api-access-n6dzx\") pod \"ceilometer-0\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " pod="openstack/ceilometer-0" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.715139 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-htv6k"] Nov 24 21:44:13 crc kubenswrapper[4915]: E1124 21:44:13.812092 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee896549_5a9e_4159_b7d9_5eb0321366aa.slice\": RecentStats: unable to find data in memory cache]" Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.858305 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.886041 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:13 crc kubenswrapper[4915]: I1124 21:44:13.908440 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.109579 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.179648 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.257479 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.290536 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-m2d7d"] Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.344303 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" event={"ID":"8641d657-17ff-4ab2-a915-d7b945a8e7bf","Type":"ContainerStarted","Data":"39aaaf5c99efbcaaff78ed994539dbf2fe4abf23f9f08beb61681d9c1e03410b"} Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.346122 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1","Type":"ContainerStarted","Data":"1f9dbe87485f9f9418413450bd7a0250d4859b7c54b79d581eef245639d2cde3"} Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.348536 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jshft" event={"ID":"fb3b004f-82a5-46ab-aff4-223567ddd793","Type":"ContainerStarted","Data":"a520360b07c830366aeddd990b071210934be56a39c28163798c5769febfd9c5"} Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.348668 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jshft" event={"ID":"fb3b004f-82a5-46ab-aff4-223567ddd793","Type":"ContainerStarted","Data":"9d5d59933769e6e1672f812f38d95d36b870d88354b7c9f9cf7def2d7379960e"} Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.353135 
4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cf56610a-46ba-4937-b001-f3054a401f2f","Type":"ContainerStarted","Data":"abc346e3f067ece41762bf80c03a78ab7fa870619ae8ac4736b5a896ddf6a66c"} Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.366186 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3012cd0-1234-4d16-ae47-7eb87738f54d","Type":"ContainerStarted","Data":"8a887a6ba1d0968e665b05e15e8a3016c09d0070907bfd09caa34d119ac08c3b"} Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.367604 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-htv6k" podUID="91467267-1652-40bb-aa0b-99ede2e49bb4" containerName="registry-server" containerID="cri-o://bc3899f6b7601c747c5c36fc3fae30af059ecf121217c436a655b0835bf58838" gracePeriod=2 Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.367705 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbf66cbe-0c30-400e-a367-f176ddf6f94c","Type":"ContainerStarted","Data":"5645bee5af9aa1bae6776a3fa8552b928dfe9bb51772731d00d80d04524b1e4a"} Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.444439 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee896549-5a9e-4159-b7d9-5eb0321366aa" path="/var/lib/kubelet/pods/ee896549-5a9e-4159-b7d9-5eb0321366aa/volumes" Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.649096 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-jshft" podStartSLOduration=2.649078197 podStartE2EDuration="2.649078197s" podCreationTimestamp="2025-11-24 21:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:44:14.368174822 +0000 UTC m=+1472.684427015" watchObservedRunningTime="2025-11-24 21:44:14.649078197 
+0000 UTC m=+1472.965330370" Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.692376 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.874382 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-298k5"] Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.877481 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.884395 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.884664 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 24 21:44:14 crc kubenswrapper[4915]: I1124 21:44:14.902751 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-298k5"] Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.073319 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-298k5\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.073428 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-config-data\") pod \"nova-cell1-conductor-db-sync-298k5\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.073477 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-scripts\") pod \"nova-cell1-conductor-db-sync-298k5\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.073518 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt5wv\" (UniqueName: \"kubernetes.io/projected/5e0b0bf9-6641-4a71-8adb-2c6e97412718-kube-api-access-jt5wv\") pod \"nova-cell1-conductor-db-sync-298k5\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.145052 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.177240 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-config-data\") pod \"nova-cell1-conductor-db-sync-298k5\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.177342 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-scripts\") pod \"nova-cell1-conductor-db-sync-298k5\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.177401 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt5wv\" (UniqueName: \"kubernetes.io/projected/5e0b0bf9-6641-4a71-8adb-2c6e97412718-kube-api-access-jt5wv\") pod 
\"nova-cell1-conductor-db-sync-298k5\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.177556 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-298k5\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.181146 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-scripts\") pod \"nova-cell1-conductor-db-sync-298k5\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.183241 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-config-data\") pod \"nova-cell1-conductor-db-sync-298k5\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.193146 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-298k5\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.198741 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt5wv\" (UniqueName: \"kubernetes.io/projected/5e0b0bf9-6641-4a71-8adb-2c6e97412718-kube-api-access-jt5wv\") pod 
\"nova-cell1-conductor-db-sync-298k5\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.270218 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.279340 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91467267-1652-40bb-aa0b-99ede2e49bb4-utilities\") pod \"91467267-1652-40bb-aa0b-99ede2e49bb4\" (UID: \"91467267-1652-40bb-aa0b-99ede2e49bb4\") " Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.279443 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hdmf\" (UniqueName: \"kubernetes.io/projected/91467267-1652-40bb-aa0b-99ede2e49bb4-kube-api-access-5hdmf\") pod \"91467267-1652-40bb-aa0b-99ede2e49bb4\" (UID: \"91467267-1652-40bb-aa0b-99ede2e49bb4\") " Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.280395 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91467267-1652-40bb-aa0b-99ede2e49bb4-utilities" (OuterVolumeSpecName: "utilities") pod "91467267-1652-40bb-aa0b-99ede2e49bb4" (UID: "91467267-1652-40bb-aa0b-99ede2e49bb4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.280648 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91467267-1652-40bb-aa0b-99ede2e49bb4-catalog-content\") pod \"91467267-1652-40bb-aa0b-99ede2e49bb4\" (UID: \"91467267-1652-40bb-aa0b-99ede2e49bb4\") " Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.281266 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91467267-1652-40bb-aa0b-99ede2e49bb4-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.285980 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91467267-1652-40bb-aa0b-99ede2e49bb4-kube-api-access-5hdmf" (OuterVolumeSpecName: "kube-api-access-5hdmf") pod "91467267-1652-40bb-aa0b-99ede2e49bb4" (UID: "91467267-1652-40bb-aa0b-99ede2e49bb4"). InnerVolumeSpecName "kube-api-access-5hdmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.382923 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hdmf\" (UniqueName: \"kubernetes.io/projected/91467267-1652-40bb-aa0b-99ede2e49bb4-kube-api-access-5hdmf\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.393193 4915 generic.go:334] "Generic (PLEG): container finished" podID="8641d657-17ff-4ab2-a915-d7b945a8e7bf" containerID="6757784d5b17baf5e0989902d8ff83b28deb761d4564fe5916b71dbb6e4aa8ec" exitCode=0 Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.393274 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" event={"ID":"8641d657-17ff-4ab2-a915-d7b945a8e7bf","Type":"ContainerDied","Data":"6757784d5b17baf5e0989902d8ff83b28deb761d4564fe5916b71dbb6e4aa8ec"} Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.414210 4915 generic.go:334] "Generic (PLEG): container finished" podID="91467267-1652-40bb-aa0b-99ede2e49bb4" containerID="bc3899f6b7601c747c5c36fc3fae30af059ecf121217c436a655b0835bf58838" exitCode=0 Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.414311 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htv6k" event={"ID":"91467267-1652-40bb-aa0b-99ede2e49bb4","Type":"ContainerDied","Data":"bc3899f6b7601c747c5c36fc3fae30af059ecf121217c436a655b0835bf58838"} Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.414338 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-htv6k" event={"ID":"91467267-1652-40bb-aa0b-99ede2e49bb4","Type":"ContainerDied","Data":"e259b0d23eba8d69eeab26d0412d268fc3ef8cbc7a6f6d38a9a3414c83f1b662"} Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.414356 4915 scope.go:117] "RemoveContainer" containerID="bc3899f6b7601c747c5c36fc3fae30af059ecf121217c436a655b0835bf58838" Nov 24 21:44:15 crc 
kubenswrapper[4915]: I1124 21:44:15.414495 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-htv6k" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.457008 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca","Type":"ContainerStarted","Data":"c9d62bcb483229cfa0d75ede44e6eab6c895f6c4291912949d1422542884e9df"} Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.529421 4915 scope.go:117] "RemoveContainer" containerID="3fa464e92b916bf0d476320585eac1f40008cae1dcd8784831755b8ae1ca0506" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.531642 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91467267-1652-40bb-aa0b-99ede2e49bb4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "91467267-1652-40bb-aa0b-99ede2e49bb4" (UID: "91467267-1652-40bb-aa0b-99ede2e49bb4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.610260 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91467267-1652-40bb-aa0b-99ede2e49bb4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.672834 4915 scope.go:117] "RemoveContainer" containerID="487d7daf8de619d22ec1ecf71c975e31ded3d5b2fcf5d305422ea412db774bfb" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.782392 4915 scope.go:117] "RemoveContainer" containerID="bc3899f6b7601c747c5c36fc3fae30af059ecf121217c436a655b0835bf58838" Nov 24 21:44:15 crc kubenswrapper[4915]: E1124 21:44:15.782967 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc3899f6b7601c747c5c36fc3fae30af059ecf121217c436a655b0835bf58838\": container with ID starting with bc3899f6b7601c747c5c36fc3fae30af059ecf121217c436a655b0835bf58838 not found: ID does not exist" containerID="bc3899f6b7601c747c5c36fc3fae30af059ecf121217c436a655b0835bf58838" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.783005 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc3899f6b7601c747c5c36fc3fae30af059ecf121217c436a655b0835bf58838"} err="failed to get container status \"bc3899f6b7601c747c5c36fc3fae30af059ecf121217c436a655b0835bf58838\": rpc error: code = NotFound desc = could not find container \"bc3899f6b7601c747c5c36fc3fae30af059ecf121217c436a655b0835bf58838\": container with ID starting with bc3899f6b7601c747c5c36fc3fae30af059ecf121217c436a655b0835bf58838 not found: ID does not exist" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.783030 4915 scope.go:117] "RemoveContainer" containerID="3fa464e92b916bf0d476320585eac1f40008cae1dcd8784831755b8ae1ca0506" Nov 24 21:44:15 crc kubenswrapper[4915]: E1124 21:44:15.794261 4915 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fa464e92b916bf0d476320585eac1f40008cae1dcd8784831755b8ae1ca0506\": container with ID starting with 3fa464e92b916bf0d476320585eac1f40008cae1dcd8784831755b8ae1ca0506 not found: ID does not exist" containerID="3fa464e92b916bf0d476320585eac1f40008cae1dcd8784831755b8ae1ca0506" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.794314 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fa464e92b916bf0d476320585eac1f40008cae1dcd8784831755b8ae1ca0506"} err="failed to get container status \"3fa464e92b916bf0d476320585eac1f40008cae1dcd8784831755b8ae1ca0506\": rpc error: code = NotFound desc = could not find container \"3fa464e92b916bf0d476320585eac1f40008cae1dcd8784831755b8ae1ca0506\": container with ID starting with 3fa464e92b916bf0d476320585eac1f40008cae1dcd8784831755b8ae1ca0506 not found: ID does not exist" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.794380 4915 scope.go:117] "RemoveContainer" containerID="487d7daf8de619d22ec1ecf71c975e31ded3d5b2fcf5d305422ea412db774bfb" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.794373 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-htv6k"] Nov 24 21:44:15 crc kubenswrapper[4915]: E1124 21:44:15.794963 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"487d7daf8de619d22ec1ecf71c975e31ded3d5b2fcf5d305422ea412db774bfb\": container with ID starting with 487d7daf8de619d22ec1ecf71c975e31ded3d5b2fcf5d305422ea412db774bfb not found: ID does not exist" containerID="487d7daf8de619d22ec1ecf71c975e31ded3d5b2fcf5d305422ea412db774bfb" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.794991 4915 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"487d7daf8de619d22ec1ecf71c975e31ded3d5b2fcf5d305422ea412db774bfb"} err="failed to get container status \"487d7daf8de619d22ec1ecf71c975e31ded3d5b2fcf5d305422ea412db774bfb\": rpc error: code = NotFound desc = could not find container \"487d7daf8de619d22ec1ecf71c975e31ded3d5b2fcf5d305422ea412db774bfb\": container with ID starting with 487d7daf8de619d22ec1ecf71c975e31ded3d5b2fcf5d305422ea412db774bfb not found: ID does not exist" Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.810818 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-htv6k"] Nov 24 21:44:15 crc kubenswrapper[4915]: I1124 21:44:15.898428 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-298k5"] Nov 24 21:44:15 crc kubenswrapper[4915]: W1124 21:44:15.935713 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e0b0bf9_6641_4a71_8adb_2c6e97412718.slice/crio-a57b442a381a6d7f8268fcdd01fdfa9c67484fa9dac6c764f9cb211f20241dc9 WatchSource:0}: Error finding container a57b442a381a6d7f8268fcdd01fdfa9c67484fa9dac6c764f9cb211f20241dc9: Status 404 returned error can't find the container with id a57b442a381a6d7f8268fcdd01fdfa9c67484fa9dac6c764f9cb211f20241dc9 Nov 24 21:44:16 crc kubenswrapper[4915]: I1124 21:44:16.466983 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91467267-1652-40bb-aa0b-99ede2e49bb4" path="/var/lib/kubelet/pods/91467267-1652-40bb-aa0b-99ede2e49bb4/volumes" Nov 24 21:44:16 crc kubenswrapper[4915]: I1124 21:44:16.491575 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-298k5" event={"ID":"5e0b0bf9-6641-4a71-8adb-2c6e97412718","Type":"ContainerStarted","Data":"7b3a95eafda5ef5aa58648bc3ebdfaf7e7e1a3145b21869374f404dce05bc8cc"} Nov 24 21:44:16 crc kubenswrapper[4915]: I1124 21:44:16.491625 4915 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-298k5" event={"ID":"5e0b0bf9-6641-4a71-8adb-2c6e97412718","Type":"ContainerStarted","Data":"a57b442a381a6d7f8268fcdd01fdfa9c67484fa9dac6c764f9cb211f20241dc9"} Nov 24 21:44:16 crc kubenswrapper[4915]: I1124 21:44:16.513089 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" event={"ID":"8641d657-17ff-4ab2-a915-d7b945a8e7bf","Type":"ContainerStarted","Data":"b0ada4a0a9bb059b84f137507a9ad84920888af4f10277f2370926934a5114c9"} Nov 24 21:44:16 crc kubenswrapper[4915]: I1124 21:44:16.513706 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:16 crc kubenswrapper[4915]: I1124 21:44:16.525471 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-298k5" podStartSLOduration=2.525445601 podStartE2EDuration="2.525445601s" podCreationTimestamp="2025-11-24 21:44:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:44:16.509136062 +0000 UTC m=+1474.825388235" watchObservedRunningTime="2025-11-24 21:44:16.525445601 +0000 UTC m=+1474.841697794" Nov 24 21:44:16 crc kubenswrapper[4915]: I1124 21:44:16.535146 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca","Type":"ContainerStarted","Data":"652a122da7c263e7829bc43528afaa69be1053dfa9eb40937ea0c6fa7bc1fa7a"} Nov 24 21:44:16 crc kubenswrapper[4915]: I1124 21:44:16.544685 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" podStartSLOduration=4.544662339 podStartE2EDuration="4.544662339s" podCreationTimestamp="2025-11-24 21:44:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-11-24 21:44:16.544482744 +0000 UTC m=+1474.860734937" watchObservedRunningTime="2025-11-24 21:44:16.544662339 +0000 UTC m=+1474.860914522" Nov 24 21:44:16 crc kubenswrapper[4915]: I1124 21:44:16.698554 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:44:16 crc kubenswrapper[4915]: I1124 21:44:16.711181 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:17 crc kubenswrapper[4915]: I1124 21:44:17.549486 4915 generic.go:334] "Generic (PLEG): container finished" podID="72a33116-8ea9-489b-819c-e69372202e03" containerID="e8c459a545949fde680ec3decd68590600bb768bc0ce898b901edef34d1ac4a0" exitCode=137 Nov 24 21:44:17 crc kubenswrapper[4915]: I1124 21:44:17.549613 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-54cc46598c-928hf" event={"ID":"72a33116-8ea9-489b-819c-e69372202e03","Type":"ContainerDied","Data":"e8c459a545949fde680ec3decd68590600bb768bc0ce898b901edef34d1ac4a0"} Nov 24 21:44:17 crc kubenswrapper[4915]: I1124 21:44:17.553322 4915 generic.go:334] "Generic (PLEG): container finished" podID="cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f" containerID="b28b879f3b20b16dbe0627e2fe2354877e944cd953073868451debaf3478b982" exitCode=137 Nov 24 21:44:17 crc kubenswrapper[4915]: I1124 21:44:17.553401 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" event={"ID":"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f","Type":"ContainerDied","Data":"b28b879f3b20b16dbe0627e2fe2354877e944cd953073868451debaf3478b982"} Nov 24 21:44:17 crc kubenswrapper[4915]: I1124 21:44:17.556729 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca","Type":"ContainerStarted","Data":"fc018c5e742b5c3febfa27f05dba9edd33bf8b77a5fa5222fc466cfda4197a4b"} Nov 24 21:44:19 crc kubenswrapper[4915]: I1124 21:44:19.753848 4915 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-54cc46598c-928hf" podUID="72a33116-8ea9-489b-819c-e69372202e03" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.217:8004/healthcheck\": dial tcp 10.217.0.217:8004: connect: connection refused" Nov 24 21:44:19 crc kubenswrapper[4915]: I1124 21:44:19.936456 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.034303 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-combined-ca-bundle\") pod \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.034376 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57nz6\" (UniqueName: \"kubernetes.io/projected/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-kube-api-access-57nz6\") pod \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.034591 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-config-data-custom\") pod \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.034701 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-config-data\") pod \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\" (UID: \"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f\") " Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.128989 4915 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-kube-api-access-57nz6" (OuterVolumeSpecName: "kube-api-access-57nz6") pod "cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f" (UID: "cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f"). InnerVolumeSpecName "kube-api-access-57nz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.134018 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f" (UID: "cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.141246 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57nz6\" (UniqueName: \"kubernetes.io/projected/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-kube-api-access-57nz6\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.141283 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.230233 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-config-data" (OuterVolumeSpecName: "config-data") pod "cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f" (UID: "cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.234206 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f" (UID: "cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.246647 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.246726 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.583618 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.639803 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3012cd0-1234-4d16-ae47-7eb87738f54d","Type":"ContainerStarted","Data":"28f62c1e42afb84029f597f292bd6d55dca3a4f11a48141552676a97c68656b7"} Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.642220 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-54cc46598c-928hf" event={"ID":"72a33116-8ea9-489b-819c-e69372202e03","Type":"ContainerDied","Data":"c415c7509f481afaf281f225c9382ec832b946f338f621cc6b7b75cc4e8e905e"} Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.642300 4915 scope.go:117] "RemoveContainer" containerID="e8c459a545949fde680ec3decd68590600bb768bc0ce898b901edef34d1ac4a0" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.642588 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-54cc46598c-928hf" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.665448 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" event={"ID":"cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f","Type":"ContainerDied","Data":"fab474f06a11bd8ebf7af56b10b625f09a4227ae86f8acf3f5cb7a813445f396"} Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.665560 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.667598 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-config-data-custom\") pod \"72a33116-8ea9-489b-819c-e69372202e03\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.667669 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-combined-ca-bundle\") pod \"72a33116-8ea9-489b-819c-e69372202e03\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.667718 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpx92\" (UniqueName: \"kubernetes.io/projected/72a33116-8ea9-489b-819c-e69372202e03-kube-api-access-hpx92\") pod \"72a33116-8ea9-489b-819c-e69372202e03\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.667954 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-config-data\") pod \"72a33116-8ea9-489b-819c-e69372202e03\" (UID: \"72a33116-8ea9-489b-819c-e69372202e03\") " Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.738538 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7b6fc899f8-tjzhx"] Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.751363 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7b6fc899f8-tjzhx"] Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.759370 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "72a33116-8ea9-489b-819c-e69372202e03" (UID: "72a33116-8ea9-489b-819c-e69372202e03"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.761349 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a33116-8ea9-489b-819c-e69372202e03-kube-api-access-hpx92" (OuterVolumeSpecName: "kube-api-access-hpx92") pod "72a33116-8ea9-489b-819c-e69372202e03" (UID: "72a33116-8ea9-489b-819c-e69372202e03"). InnerVolumeSpecName "kube-api-access-hpx92". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.771550 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.771594 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpx92\" (UniqueName: \"kubernetes.io/projected/72a33116-8ea9-489b-819c-e69372202e03-kube-api-access-hpx92\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:20 crc kubenswrapper[4915]: I1124 21:44:20.822858 4915 scope.go:117] "RemoveContainer" containerID="b28b879f3b20b16dbe0627e2fe2354877e944cd953073868451debaf3478b982" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.050673 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-nzxkj"] Nov 24 21:44:21 crc kubenswrapper[4915]: E1124 21:44:21.051621 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a33116-8ea9-489b-819c-e69372202e03" containerName="heat-api" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.051647 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="72a33116-8ea9-489b-819c-e69372202e03" containerName="heat-api" Nov 24 21:44:21 crc kubenswrapper[4915]: E1124 21:44:21.051666 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91467267-1652-40bb-aa0b-99ede2e49bb4" containerName="extract-content" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.051673 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="91467267-1652-40bb-aa0b-99ede2e49bb4" containerName="extract-content" Nov 24 21:44:21 crc kubenswrapper[4915]: E1124 21:44:21.051683 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91467267-1652-40bb-aa0b-99ede2e49bb4" containerName="registry-server" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.051690 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="91467267-1652-40bb-aa0b-99ede2e49bb4" containerName="registry-server" Nov 24 21:44:21 crc kubenswrapper[4915]: E1124 21:44:21.051700 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f" containerName="heat-cfnapi" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.051705 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f" containerName="heat-cfnapi" Nov 24 21:44:21 crc kubenswrapper[4915]: E1124 21:44:21.051726 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91467267-1652-40bb-aa0b-99ede2e49bb4" containerName="extract-utilities" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.051732 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="91467267-1652-40bb-aa0b-99ede2e49bb4" containerName="extract-utilities" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.052021 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f" containerName="heat-cfnapi" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.052044 4915 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="91467267-1652-40bb-aa0b-99ede2e49bb4" containerName="registry-server" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.052068 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a33116-8ea9-489b-819c-e69372202e03" containerName="heat-api" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.053047 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-nzxkj" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.067230 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-nzxkj"] Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.085842 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-405b-account-create-jjfvx"] Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.087334 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-405b-account-create-jjfvx" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.094584 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.137821 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-405b-account-create-jjfvx"] Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.177124 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72a33116-8ea9-489b-819c-e69372202e03" (UID: "72a33116-8ea9-489b-819c-e69372202e03"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.185288 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96df9840-ef15-4b5e-a23c-a7c0dbdf9d14-operator-scripts\") pod \"aodh-405b-account-create-jjfvx\" (UID: \"96df9840-ef15-4b5e-a23c-a7c0dbdf9d14\") " pod="openstack/aodh-405b-account-create-jjfvx" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.185325 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c94bae1f-7fd1-400e-b38f-6d835e52bb97-operator-scripts\") pod \"aodh-db-create-nzxkj\" (UID: \"c94bae1f-7fd1-400e-b38f-6d835e52bb97\") " pod="openstack/aodh-db-create-nzxkj" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.185436 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw2fb\" (UniqueName: \"kubernetes.io/projected/c94bae1f-7fd1-400e-b38f-6d835e52bb97-kube-api-access-jw2fb\") pod \"aodh-db-create-nzxkj\" (UID: \"c94bae1f-7fd1-400e-b38f-6d835e52bb97\") " pod="openstack/aodh-db-create-nzxkj" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.185520 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv7m4\" (UniqueName: \"kubernetes.io/projected/96df9840-ef15-4b5e-a23c-a7c0dbdf9d14-kube-api-access-fv7m4\") pod \"aodh-405b-account-create-jjfvx\" (UID: \"96df9840-ef15-4b5e-a23c-a7c0dbdf9d14\") " pod="openstack/aodh-405b-account-create-jjfvx" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.185598 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:21 crc 
kubenswrapper[4915]: I1124 21:44:21.284121 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-config-data" (OuterVolumeSpecName: "config-data") pod "72a33116-8ea9-489b-819c-e69372202e03" (UID: "72a33116-8ea9-489b-819c-e69372202e03"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.292707 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96df9840-ef15-4b5e-a23c-a7c0dbdf9d14-operator-scripts\") pod \"aodh-405b-account-create-jjfvx\" (UID: \"96df9840-ef15-4b5e-a23c-a7c0dbdf9d14\") " pod="openstack/aodh-405b-account-create-jjfvx" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.292769 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c94bae1f-7fd1-400e-b38f-6d835e52bb97-operator-scripts\") pod \"aodh-db-create-nzxkj\" (UID: \"c94bae1f-7fd1-400e-b38f-6d835e52bb97\") " pod="openstack/aodh-db-create-nzxkj" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.292924 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw2fb\" (UniqueName: \"kubernetes.io/projected/c94bae1f-7fd1-400e-b38f-6d835e52bb97-kube-api-access-jw2fb\") pod \"aodh-db-create-nzxkj\" (UID: \"c94bae1f-7fd1-400e-b38f-6d835e52bb97\") " pod="openstack/aodh-db-create-nzxkj" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.293006 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7m4\" (UniqueName: \"kubernetes.io/projected/96df9840-ef15-4b5e-a23c-a7c0dbdf9d14-kube-api-access-fv7m4\") pod \"aodh-405b-account-create-jjfvx\" (UID: \"96df9840-ef15-4b5e-a23c-a7c0dbdf9d14\") " pod="openstack/aodh-405b-account-create-jjfvx" Nov 24 21:44:21 crc 
kubenswrapper[4915]: I1124 21:44:21.293110 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72a33116-8ea9-489b-819c-e69372202e03-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.294251 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96df9840-ef15-4b5e-a23c-a7c0dbdf9d14-operator-scripts\") pod \"aodh-405b-account-create-jjfvx\" (UID: \"96df9840-ef15-4b5e-a23c-a7c0dbdf9d14\") " pod="openstack/aodh-405b-account-create-jjfvx" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.294689 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c94bae1f-7fd1-400e-b38f-6d835e52bb97-operator-scripts\") pod \"aodh-db-create-nzxkj\" (UID: \"c94bae1f-7fd1-400e-b38f-6d835e52bb97\") " pod="openstack/aodh-db-create-nzxkj" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.326016 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw2fb\" (UniqueName: \"kubernetes.io/projected/c94bae1f-7fd1-400e-b38f-6d835e52bb97-kube-api-access-jw2fb\") pod \"aodh-db-create-nzxkj\" (UID: \"c94bae1f-7fd1-400e-b38f-6d835e52bb97\") " pod="openstack/aodh-db-create-nzxkj" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.326474 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv7m4\" (UniqueName: \"kubernetes.io/projected/96df9840-ef15-4b5e-a23c-a7c0dbdf9d14-kube-api-access-fv7m4\") pod \"aodh-405b-account-create-jjfvx\" (UID: \"96df9840-ef15-4b5e-a23c-a7c0dbdf9d14\") " pod="openstack/aodh-405b-account-create-jjfvx" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.438569 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-nzxkj" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.457589 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-405b-account-create-jjfvx" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.702222 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cf56610a-46ba-4937-b001-f3054a401f2f","Type":"ContainerStarted","Data":"a788202b84d776934294fe577d610efff3d340bf505787b8ad7def6968f798f2"} Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.706401 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca","Type":"ContainerStarted","Data":"a697934c1cbaa1a117ec4d480fcd6e76855e24f2c8cac88d01de1fe73ff91a90"} Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.713432 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbf66cbe-0c30-400e-a367-f176ddf6f94c","Type":"ContainerStarted","Data":"338d78185e578e0de890f76d6bab6277eb864d0fa36a309c6b883fa097ba08ec"} Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.724615 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.792386338 podStartE2EDuration="9.724593414s" podCreationTimestamp="2025-11-24 21:44:12 +0000 UTC" firstStartedPulling="2025-11-24 21:44:14.109112774 +0000 UTC m=+1472.425364947" lastFinishedPulling="2025-11-24 21:44:20.04131985 +0000 UTC m=+1478.357572023" observedRunningTime="2025-11-24 21:44:21.721220783 +0000 UTC m=+1480.037472956" watchObservedRunningTime="2025-11-24 21:44:21.724593414 +0000 UTC m=+1480.040845587" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.740879 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1","Type":"ContainerStarted","Data":"6b0aa3428e2ffd403420cd969f1ed3877c982cf1e85c0b3af3385eb466455007"} Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.741187 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://6b0aa3428e2ffd403420cd969f1ed3877c982cf1e85c0b3af3385eb466455007" gracePeriod=30 Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.769988 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.007900503 podStartE2EDuration="9.769970956s" podCreationTimestamp="2025-11-24 21:44:12 +0000 UTC" firstStartedPulling="2025-11-24 21:44:14.276956765 +0000 UTC m=+1472.593208938" lastFinishedPulling="2025-11-24 21:44:20.039027218 +0000 UTC m=+1478.355279391" observedRunningTime="2025-11-24 21:44:21.767309634 +0000 UTC m=+1480.083561817" watchObservedRunningTime="2025-11-24 21:44:21.769970956 +0000 UTC m=+1480.086223129" Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.853138 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-54cc46598c-928hf"] Nov 24 21:44:21 crc kubenswrapper[4915]: I1124 21:44:21.873958 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-54cc46598c-928hf"] Nov 24 21:44:22 crc kubenswrapper[4915]: I1124 21:44:22.218605 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-nzxkj"] Nov 24 21:44:22 crc kubenswrapper[4915]: I1124 21:44:22.403420 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-405b-account-create-jjfvx"] Nov 24 21:44:22 crc kubenswrapper[4915]: I1124 21:44:22.449418 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72a33116-8ea9-489b-819c-e69372202e03" 
path="/var/lib/kubelet/pods/72a33116-8ea9-489b-819c-e69372202e03/volumes" Nov 24 21:44:22 crc kubenswrapper[4915]: I1124 21:44:22.450468 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f" path="/var/lib/kubelet/pods/cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f/volumes" Nov 24 21:44:22 crc kubenswrapper[4915]: I1124 21:44:22.755138 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-nzxkj" event={"ID":"c94bae1f-7fd1-400e-b38f-6d835e52bb97","Type":"ContainerStarted","Data":"ea4278de784986a17cbdb3b0f1a5edaa2ab5bf10129c971642f771752f8d6b02"} Nov 24 21:44:22 crc kubenswrapper[4915]: I1124 21:44:22.756792 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-405b-account-create-jjfvx" event={"ID":"96df9840-ef15-4b5e-a23c-a7c0dbdf9d14","Type":"ContainerStarted","Data":"ff2c58b3d81e12e3d0e483710cf7ab917be22f1ae1c6634ad51bc826d923cebe"} Nov 24 21:44:22 crc kubenswrapper[4915]: I1124 21:44:22.762275 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3012cd0-1234-4d16-ae47-7eb87738f54d","Type":"ContainerStarted","Data":"488ec1ee3de8b48b38b433ee5c077db6d3040ee6c11194adb3b5a15ab22d8bf7"} Nov 24 21:44:22 crc kubenswrapper[4915]: I1124 21:44:22.766053 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbf66cbe-0c30-400e-a367-f176ddf6f94c","Type":"ContainerStarted","Data":"072872b04cbd566b271f89f335395bcde5e955da2c8020bd5ab2f1e55b23632d"} Nov 24 21:44:22 crc kubenswrapper[4915]: I1124 21:44:22.766956 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dbf66cbe-0c30-400e-a367-f176ddf6f94c" containerName="nova-metadata-log" containerID="cri-o://338d78185e578e0de890f76d6bab6277eb864d0fa36a309c6b883fa097ba08ec" gracePeriod=30 Nov 24 21:44:22 crc kubenswrapper[4915]: I1124 21:44:22.767150 4915 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dbf66cbe-0c30-400e-a367-f176ddf6f94c" containerName="nova-metadata-metadata" containerID="cri-o://072872b04cbd566b271f89f335395bcde5e955da2c8020bd5ab2f1e55b23632d" gracePeriod=30 Nov 24 21:44:22 crc kubenswrapper[4915]: I1124 21:44:22.796646 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.683143638 podStartE2EDuration="10.796624755s" podCreationTimestamp="2025-11-24 21:44:12 +0000 UTC" firstStartedPulling="2025-11-24 21:44:13.897494916 +0000 UTC m=+1472.213747089" lastFinishedPulling="2025-11-24 21:44:20.010976033 +0000 UTC m=+1478.327228206" observedRunningTime="2025-11-24 21:44:22.778790295 +0000 UTC m=+1481.095042468" watchObservedRunningTime="2025-11-24 21:44:22.796624755 +0000 UTC m=+1481.112876928" Nov 24 21:44:22 crc kubenswrapper[4915]: I1124 21:44:22.815632 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.664918908 podStartE2EDuration="10.815607907s" podCreationTimestamp="2025-11-24 21:44:12 +0000 UTC" firstStartedPulling="2025-11-24 21:44:13.897398823 +0000 UTC m=+1472.213650996" lastFinishedPulling="2025-11-24 21:44:20.048087822 +0000 UTC m=+1478.364339995" observedRunningTime="2025-11-24 21:44:22.80160243 +0000 UTC m=+1481.117854603" watchObservedRunningTime="2025-11-24 21:44:22.815607907 +0000 UTC m=+1481.131860080" Nov 24 21:44:22 crc kubenswrapper[4915]: I1124 21:44:22.972846 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:44:22 crc kubenswrapper[4915]: I1124 21:44:22.973470 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.084972 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:44:23 crc 
kubenswrapper[4915]: I1124 21:44:23.085074 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.121304 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.121356 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.175456 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.271374 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.380977 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.487347 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ncs8x"] Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.488235 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" podUID="0327c0c2-dd19-48b6-946d-3b1460a7a260" containerName="dnsmasq-dns" containerID="cri-o://2edd553f1ef5fbadcc771a627b44cc68540fef7b1f99306b6b987931cf80c7b5" gracePeriod=10 Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.807466 4915 generic.go:334] "Generic (PLEG): container finished" podID="0327c0c2-dd19-48b6-946d-3b1460a7a260" containerID="2edd553f1ef5fbadcc771a627b44cc68540fef7b1f99306b6b987931cf80c7b5" exitCode=0 Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.807939 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" 
event={"ID":"0327c0c2-dd19-48b6-946d-3b1460a7a260","Type":"ContainerDied","Data":"2edd553f1ef5fbadcc771a627b44cc68540fef7b1f99306b6b987931cf80c7b5"} Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.832342 4915 generic.go:334] "Generic (PLEG): container finished" podID="dbf66cbe-0c30-400e-a367-f176ddf6f94c" containerID="072872b04cbd566b271f89f335395bcde5e955da2c8020bd5ab2f1e55b23632d" exitCode=0 Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.832382 4915 generic.go:334] "Generic (PLEG): container finished" podID="dbf66cbe-0c30-400e-a367-f176ddf6f94c" containerID="338d78185e578e0de890f76d6bab6277eb864d0fa36a309c6b883fa097ba08ec" exitCode=143 Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.832452 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbf66cbe-0c30-400e-a367-f176ddf6f94c","Type":"ContainerDied","Data":"072872b04cbd566b271f89f335395bcde5e955da2c8020bd5ab2f1e55b23632d"} Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.832489 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbf66cbe-0c30-400e-a367-f176ddf6f94c","Type":"ContainerDied","Data":"338d78185e578e0de890f76d6bab6277eb864d0fa36a309c6b883fa097ba08ec"} Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.840886 4915 generic.go:334] "Generic (PLEG): container finished" podID="c94bae1f-7fd1-400e-b38f-6d835e52bb97" containerID="6ecf36413b1fe9a95006d4a9c1d62fd142b7c0d6e4553b2e3485b1a2f446094f" exitCode=0 Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.840965 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-nzxkj" event={"ID":"c94bae1f-7fd1-400e-b38f-6d835e52bb97","Type":"ContainerDied","Data":"6ecf36413b1fe9a95006d4a9c1d62fd142b7c0d6e4553b2e3485b1a2f446094f"} Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.855336 4915 generic.go:334] "Generic (PLEG): container finished" podID="96df9840-ef15-4b5e-a23c-a7c0dbdf9d14" 
containerID="7c04e6f7357657b30eb6eeae432165a3498378c803732965f225a4a23419eaac" exitCode=0 Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.855415 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-405b-account-create-jjfvx" event={"ID":"96df9840-ef15-4b5e-a23c-a7c0dbdf9d14","Type":"ContainerDied","Data":"7c04e6f7357657b30eb6eeae432165a3498378c803732965f225a4a23419eaac"} Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.870105 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca","Type":"ContainerStarted","Data":"b296386b7382c00fb411a5ee525eccc1763b1dd4f49f297354a2d7120d31fb75"} Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.872364 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="ceilometer-central-agent" containerID="cri-o://652a122da7c263e7829bc43528afaa69be1053dfa9eb40937ea0c6fa7bc1fa7a" gracePeriod=30 Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.872450 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.872492 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="proxy-httpd" containerID="cri-o://b296386b7382c00fb411a5ee525eccc1763b1dd4f49f297354a2d7120d31fb75" gracePeriod=30 Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.872533 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="sg-core" containerID="cri-o://a697934c1cbaa1a117ec4d480fcd6e76855e24f2c8cac88d01de1fe73ff91a90" gracePeriod=30 Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.872563 4915 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="ceilometer-notification-agent" containerID="cri-o://fc018c5e742b5c3febfa27f05dba9edd33bf8b77a5fa5222fc466cfda4197a4b" gracePeriod=30 Nov 24 21:44:23 crc kubenswrapper[4915]: I1124 21:44:23.969078 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.018981 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c3012cd0-1234-4d16-ae47-7eb87738f54d" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.061305 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.090813055 podStartE2EDuration="11.061286035s" podCreationTimestamp="2025-11-24 21:44:13 +0000 UTC" firstStartedPulling="2025-11-24 21:44:14.654132853 +0000 UTC m=+1472.970385026" lastFinishedPulling="2025-11-24 21:44:22.624605833 +0000 UTC m=+1480.940858006" observedRunningTime="2025-11-24 21:44:23.937758539 +0000 UTC m=+1482.254010802" watchObservedRunningTime="2025-11-24 21:44:24.061286035 +0000 UTC m=+1482.377538198" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.062659 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c3012cd0-1234-4d16-ae47-7eb87738f54d" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.106269 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.528102 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbf66cbe-0c30-400e-a367-f176ddf6f94c-combined-ca-bundle\") pod \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.528183 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf66cbe-0c30-400e-a367-f176ddf6f94c-config-data\") pod \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.528465 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlkdc\" (UniqueName: \"kubernetes.io/projected/dbf66cbe-0c30-400e-a367-f176ddf6f94c-kube-api-access-rlkdc\") pod \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.528559 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbf66cbe-0c30-400e-a367-f176ddf6f94c-logs\") pod \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\" (UID: \"dbf66cbe-0c30-400e-a367-f176ddf6f94c\") " Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.536019 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbf66cbe-0c30-400e-a367-f176ddf6f94c-logs" (OuterVolumeSpecName: "logs") pod "dbf66cbe-0c30-400e-a367-f176ddf6f94c" (UID: "dbf66cbe-0c30-400e-a367-f176ddf6f94c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.579510 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf66cbe-0c30-400e-a367-f176ddf6f94c-kube-api-access-rlkdc" (OuterVolumeSpecName: "kube-api-access-rlkdc") pod "dbf66cbe-0c30-400e-a367-f176ddf6f94c" (UID: "dbf66cbe-0c30-400e-a367-f176ddf6f94c"). InnerVolumeSpecName "kube-api-access-rlkdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.589352 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf66cbe-0c30-400e-a367-f176ddf6f94c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbf66cbe-0c30-400e-a367-f176ddf6f94c" (UID: "dbf66cbe-0c30-400e-a367-f176ddf6f94c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.624737 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf66cbe-0c30-400e-a367-f176ddf6f94c-config-data" (OuterVolumeSpecName: "config-data") pod "dbf66cbe-0c30-400e-a367-f176ddf6f94c" (UID: "dbf66cbe-0c30-400e-a367-f176ddf6f94c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.640654 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbf66cbe-0c30-400e-a367-f176ddf6f94c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.640693 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf66cbe-0c30-400e-a367-f176ddf6f94c-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.640705 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlkdc\" (UniqueName: \"kubernetes.io/projected/dbf66cbe-0c30-400e-a367-f176ddf6f94c-kube-api-access-rlkdc\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.640721 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbf66cbe-0c30-400e-a367-f176ddf6f94c-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.869402 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.890707 4915 generic.go:334] "Generic (PLEG): container finished" podID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerID="b296386b7382c00fb411a5ee525eccc1763b1dd4f49f297354a2d7120d31fb75" exitCode=0 Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.890738 4915 generic.go:334] "Generic (PLEG): container finished" podID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerID="a697934c1cbaa1a117ec4d480fcd6e76855e24f2c8cac88d01de1fe73ff91a90" exitCode=2 Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.890747 4915 generic.go:334] "Generic (PLEG): container finished" podID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerID="fc018c5e742b5c3febfa27f05dba9edd33bf8b77a5fa5222fc466cfda4197a4b" exitCode=0 Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.890811 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca","Type":"ContainerDied","Data":"b296386b7382c00fb411a5ee525eccc1763b1dd4f49f297354a2d7120d31fb75"} Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.890837 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca","Type":"ContainerDied","Data":"a697934c1cbaa1a117ec4d480fcd6e76855e24f2c8cac88d01de1fe73ff91a90"} Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.890847 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca","Type":"ContainerDied","Data":"fc018c5e742b5c3febfa27f05dba9edd33bf8b77a5fa5222fc466cfda4197a4b"} Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.896015 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" 
event={"ID":"0327c0c2-dd19-48b6-946d-3b1460a7a260","Type":"ContainerDied","Data":"70029c7897c4504f32eecdac7817e1f47056764739158d6bef3583e7e50326f8"} Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.896053 4915 scope.go:117] "RemoveContainer" containerID="2edd553f1ef5fbadcc771a627b44cc68540fef7b1f99306b6b987931cf80c7b5" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.896182 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.905478 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dbf66cbe-0c30-400e-a367-f176ddf6f94c","Type":"ContainerDied","Data":"5645bee5af9aa1bae6776a3fa8552b928dfe9bb51772731d00d80d04524b1e4a"} Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.906907 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.944360 4915 scope.go:117] "RemoveContainer" containerID="45897b5946030f47e1af32af054c49a1f36709d46d1ca32aba0807ce65de1e7b" Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.970823 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:24 crc kubenswrapper[4915]: I1124 21:44:24.989936 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.011702 4915 scope.go:117] "RemoveContainer" containerID="072872b04cbd566b271f89f335395bcde5e955da2c8020bd5ab2f1e55b23632d" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.013736 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:25 crc kubenswrapper[4915]: E1124 21:44:25.014467 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0327c0c2-dd19-48b6-946d-3b1460a7a260" containerName="init" Nov 
24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.014486 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0327c0c2-dd19-48b6-946d-3b1460a7a260" containerName="init" Nov 24 21:44:25 crc kubenswrapper[4915]: E1124 21:44:25.014519 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0327c0c2-dd19-48b6-946d-3b1460a7a260" containerName="dnsmasq-dns" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.014526 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0327c0c2-dd19-48b6-946d-3b1460a7a260" containerName="dnsmasq-dns" Nov 24 21:44:25 crc kubenswrapper[4915]: E1124 21:44:25.014551 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf66cbe-0c30-400e-a367-f176ddf6f94c" containerName="nova-metadata-metadata" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.014558 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf66cbe-0c30-400e-a367-f176ddf6f94c" containerName="nova-metadata-metadata" Nov 24 21:44:25 crc kubenswrapper[4915]: E1124 21:44:25.014570 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf66cbe-0c30-400e-a367-f176ddf6f94c" containerName="nova-metadata-log" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.014577 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf66cbe-0c30-400e-a367-f176ddf6f94c" containerName="nova-metadata-log" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.014798 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0327c0c2-dd19-48b6-946d-3b1460a7a260" containerName="dnsmasq-dns" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.014832 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf66cbe-0c30-400e-a367-f176ddf6f94c" containerName="nova-metadata-metadata" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.014840 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf66cbe-0c30-400e-a367-f176ddf6f94c" containerName="nova-metadata-log" Nov 24 
21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.016141 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.022395 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.024059 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.044894 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.066894 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-config\") pod \"0327c0c2-dd19-48b6-946d-3b1460a7a260\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.067017 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-dns-svc\") pod \"0327c0c2-dd19-48b6-946d-3b1460a7a260\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.067184 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-ovsdbserver-nb\") pod \"0327c0c2-dd19-48b6-946d-3b1460a7a260\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.067224 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-dns-swift-storage-0\") pod 
\"0327c0c2-dd19-48b6-946d-3b1460a7a260\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.067342 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwvng\" (UniqueName: \"kubernetes.io/projected/0327c0c2-dd19-48b6-946d-3b1460a7a260-kube-api-access-dwvng\") pod \"0327c0c2-dd19-48b6-946d-3b1460a7a260\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.067452 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-ovsdbserver-sb\") pod \"0327c0c2-dd19-48b6-946d-3b1460a7a260\" (UID: \"0327c0c2-dd19-48b6-946d-3b1460a7a260\") " Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.100808 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0327c0c2-dd19-48b6-946d-3b1460a7a260-kube-api-access-dwvng" (OuterVolumeSpecName: "kube-api-access-dwvng") pod "0327c0c2-dd19-48b6-946d-3b1460a7a260" (UID: "0327c0c2-dd19-48b6-946d-3b1460a7a260"). InnerVolumeSpecName "kube-api-access-dwvng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.105447 4915 scope.go:117] "RemoveContainer" containerID="338d78185e578e0de890f76d6bab6277eb864d0fa36a309c6b883fa097ba08ec" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.173255 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.173394 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.173453 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6wvm\" (UniqueName: \"kubernetes.io/projected/b6dac01c-4f6c-478a-8686-b2450ad59ec2-kube-api-access-v6wvm\") pod \"nova-metadata-0\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.173491 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6dac01c-4f6c-478a-8686-b2450ad59ec2-logs\") pod \"nova-metadata-0\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.174511 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-config-data\") pod \"nova-metadata-0\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.175431 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-config" (OuterVolumeSpecName: "config") pod "0327c0c2-dd19-48b6-946d-3b1460a7a260" (UID: "0327c0c2-dd19-48b6-946d-3b1460a7a260"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.183227 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0327c0c2-dd19-48b6-946d-3b1460a7a260" (UID: "0327c0c2-dd19-48b6-946d-3b1460a7a260"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.185601 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.185840 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwvng\" (UniqueName: \"kubernetes.io/projected/0327c0c2-dd19-48b6-946d-3b1460a7a260-kube-api-access-dwvng\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.189372 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0327c0c2-dd19-48b6-946d-3b1460a7a260" (UID: "0327c0c2-dd19-48b6-946d-3b1460a7a260"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.200438 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0327c0c2-dd19-48b6-946d-3b1460a7a260" (UID: "0327c0c2-dd19-48b6-946d-3b1460a7a260"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.223638 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0327c0c2-dd19-48b6-946d-3b1460a7a260" (UID: "0327c0c2-dd19-48b6-946d-3b1460a7a260"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.287181 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.287330 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.287370 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6wvm\" (UniqueName: \"kubernetes.io/projected/b6dac01c-4f6c-478a-8686-b2450ad59ec2-kube-api-access-v6wvm\") pod \"nova-metadata-0\" (UID: 
\"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.287410 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6dac01c-4f6c-478a-8686-b2450ad59ec2-logs\") pod \"nova-metadata-0\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.287453 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-config-data\") pod \"nova-metadata-0\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.287552 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.287562 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.287574 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.287582 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0327c0c2-dd19-48b6-946d-3b1460a7a260-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.288161 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b6dac01c-4f6c-478a-8686-b2450ad59ec2-logs\") pod \"nova-metadata-0\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.291993 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.292128 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.292195 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-config-data\") pod \"nova-metadata-0\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.312421 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6wvm\" (UniqueName: \"kubernetes.io/projected/b6dac01c-4f6c-478a-8686-b2450ad59ec2-kube-api-access-v6wvm\") pod \"nova-metadata-0\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.375187 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:44:25 crc kubenswrapper[4915]: I1124 21:44:25.633091 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-405b-account-create-jjfvx" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.662070 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-nzxkj" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.667102 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ncs8x"] Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.697252 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-ncs8x"] Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.798227 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c94bae1f-7fd1-400e-b38f-6d835e52bb97-operator-scripts\") pod \"c94bae1f-7fd1-400e-b38f-6d835e52bb97\" (UID: \"c94bae1f-7fd1-400e-b38f-6d835e52bb97\") " Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.798378 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw2fb\" (UniqueName: \"kubernetes.io/projected/c94bae1f-7fd1-400e-b38f-6d835e52bb97-kube-api-access-jw2fb\") pod \"c94bae1f-7fd1-400e-b38f-6d835e52bb97\" (UID: \"c94bae1f-7fd1-400e-b38f-6d835e52bb97\") " Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.798400 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96df9840-ef15-4b5e-a23c-a7c0dbdf9d14-operator-scripts\") pod \"96df9840-ef15-4b5e-a23c-a7c0dbdf9d14\" (UID: \"96df9840-ef15-4b5e-a23c-a7c0dbdf9d14\") " Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.798550 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv7m4\" (UniqueName: \"kubernetes.io/projected/96df9840-ef15-4b5e-a23c-a7c0dbdf9d14-kube-api-access-fv7m4\") pod \"96df9840-ef15-4b5e-a23c-a7c0dbdf9d14\" (UID: 
\"96df9840-ef15-4b5e-a23c-a7c0dbdf9d14\") " Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.801606 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c94bae1f-7fd1-400e-b38f-6d835e52bb97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c94bae1f-7fd1-400e-b38f-6d835e52bb97" (UID: "c94bae1f-7fd1-400e-b38f-6d835e52bb97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.801604 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96df9840-ef15-4b5e-a23c-a7c0dbdf9d14-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96df9840-ef15-4b5e-a23c-a7c0dbdf9d14" (UID: "96df9840-ef15-4b5e-a23c-a7c0dbdf9d14"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.804251 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96df9840-ef15-4b5e-a23c-a7c0dbdf9d14-kube-api-access-fv7m4" (OuterVolumeSpecName: "kube-api-access-fv7m4") pod "96df9840-ef15-4b5e-a23c-a7c0dbdf9d14" (UID: "96df9840-ef15-4b5e-a23c-a7c0dbdf9d14"). InnerVolumeSpecName "kube-api-access-fv7m4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.804866 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c94bae1f-7fd1-400e-b38f-6d835e52bb97-kube-api-access-jw2fb" (OuterVolumeSpecName: "kube-api-access-jw2fb") pod "c94bae1f-7fd1-400e-b38f-6d835e52bb97" (UID: "c94bae1f-7fd1-400e-b38f-6d835e52bb97"). InnerVolumeSpecName "kube-api-access-jw2fb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.900991 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw2fb\" (UniqueName: \"kubernetes.io/projected/c94bae1f-7fd1-400e-b38f-6d835e52bb97-kube-api-access-jw2fb\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.901023 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96df9840-ef15-4b5e-a23c-a7c0dbdf9d14-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.901032 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv7m4\" (UniqueName: \"kubernetes.io/projected/96df9840-ef15-4b5e-a23c-a7c0dbdf9d14-kube-api-access-fv7m4\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.901041 4915 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c94bae1f-7fd1-400e-b38f-6d835e52bb97-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.918912 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-405b-account-create-jjfvx" event={"ID":"96df9840-ef15-4b5e-a23c-a7c0dbdf9d14","Type":"ContainerDied","Data":"ff2c58b3d81e12e3d0e483710cf7ab917be22f1ae1c6634ad51bc826d923cebe"} Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.918963 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff2c58b3d81e12e3d0e483710cf7ab917be22f1ae1c6634ad51bc826d923cebe" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.919016 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-405b-account-create-jjfvx" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.929025 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-nzxkj" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.929031 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-nzxkj" event={"ID":"c94bae1f-7fd1-400e-b38f-6d835e52bb97","Type":"ContainerDied","Data":"ea4278de784986a17cbdb3b0f1a5edaa2ab5bf10129c971642f771752f8d6b02"} Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:25.929105 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea4278de784986a17cbdb3b0f1a5edaa2ab5bf10129c971642f771752f8d6b02" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:26.408560 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:26.462543 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0327c0c2-dd19-48b6-946d-3b1460a7a260" path="/var/lib/kubelet/pods/0327c0c2-dd19-48b6-946d-3b1460a7a260/volumes" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:26.463215 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf66cbe-0c30-400e-a367-f176ddf6f94c" path="/var/lib/kubelet/pods/dbf66cbe-0c30-400e-a367-f176ddf6f94c/volumes" Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:26.939299 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6dac01c-4f6c-478a-8686-b2450ad59ec2","Type":"ContainerStarted","Data":"b8e6d145eb90989f54e49165184ca6e3e65eacaeb7ef6489842bf7b5cf0f6522"} Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:26.939352 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"b6dac01c-4f6c-478a-8686-b2450ad59ec2","Type":"ContainerStarted","Data":"a38a32992b751f1ac867bb8b4ddb156e46ba5f7b8d2bcf11a0b8668b25057a20"} Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:26.939372 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6dac01c-4f6c-478a-8686-b2450ad59ec2","Type":"ContainerStarted","Data":"156f1f2045d643816264d7c54ae38bccff429ddaf8885d3f01a78fdc31787960"} Nov 24 21:44:26 crc kubenswrapper[4915]: I1124 21:44:26.962345 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.962327095 podStartE2EDuration="2.962327095s" podCreationTimestamp="2025-11-24 21:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:44:26.958517843 +0000 UTC m=+1485.274770036" watchObservedRunningTime="2025-11-24 21:44:26.962327095 +0000 UTC m=+1485.278579268" Nov 24 21:44:27 crc kubenswrapper[4915]: I1124 21:44:27.956940 4915 generic.go:334] "Generic (PLEG): container finished" podID="fb3b004f-82a5-46ab-aff4-223567ddd793" containerID="a520360b07c830366aeddd990b071210934be56a39c28163798c5769febfd9c5" exitCode=0 Nov 24 21:44:27 crc kubenswrapper[4915]: I1124 21:44:27.957043 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jshft" event={"ID":"fb3b004f-82a5-46ab-aff4-223567ddd793","Type":"ContainerDied","Data":"a520360b07c830366aeddd990b071210934be56a39c28163798c5769febfd9c5"} Nov 24 21:44:27 crc kubenswrapper[4915]: I1124 21:44:27.959886 4915 generic.go:334] "Generic (PLEG): container finished" podID="5e0b0bf9-6641-4a71-8adb-2c6e97412718" containerID="7b3a95eafda5ef5aa58648bc3ebdfaf7e7e1a3145b21869374f404dce05bc8cc" exitCode=0 Nov 24 21:44:27 crc kubenswrapper[4915]: I1124 21:44:27.960880 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-298k5" event={"ID":"5e0b0bf9-6641-4a71-8adb-2c6e97412718","Type":"ContainerDied","Data":"7b3a95eafda5ef5aa58648bc3ebdfaf7e7e1a3145b21869374f404dce05bc8cc"} Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.853152 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.965801 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6dzx\" (UniqueName: \"kubernetes.io/projected/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-kube-api-access-n6dzx\") pod \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.966645 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-combined-ca-bundle\") pod \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.967112 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-sg-core-conf-yaml\") pod \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.967487 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-log-httpd\") pod \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.967740 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-config-data\") pod \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.967948 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-scripts\") pod \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.968049 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-run-httpd\") pod \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\" (UID: \"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca\") " Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.968638 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" (UID: "0da56fdd-7e96-4fd2-93a7-cbab303ab8ca"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.968978 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" (UID: "0da56fdd-7e96-4fd2-93a7-cbab303ab8ca"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.969421 4915 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.969509 4915 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.973092 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-kube-api-access-n6dzx" (OuterVolumeSpecName: "kube-api-access-n6dzx") pod "0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" (UID: "0da56fdd-7e96-4fd2-93a7-cbab303ab8ca"). InnerVolumeSpecName "kube-api-access-n6dzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.974272 4915 generic.go:334] "Generic (PLEG): container finished" podID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerID="652a122da7c263e7829bc43528afaa69be1053dfa9eb40937ea0c6fa7bc1fa7a" exitCode=0 Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.974359 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.974368 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca","Type":"ContainerDied","Data":"652a122da7c263e7829bc43528afaa69be1053dfa9eb40937ea0c6fa7bc1fa7a"} Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.974411 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0da56fdd-7e96-4fd2-93a7-cbab303ab8ca","Type":"ContainerDied","Data":"c9d62bcb483229cfa0d75ede44e6eab6c895f6c4291912949d1422542884e9df"} Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.974429 4915 scope.go:117] "RemoveContainer" containerID="b296386b7382c00fb411a5ee525eccc1763b1dd4f49f297354a2d7120d31fb75" Nov 24 21:44:28 crc kubenswrapper[4915]: I1124 21:44:28.976268 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-scripts" (OuterVolumeSpecName: "scripts") pod "0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" (UID: "0da56fdd-7e96-4fd2-93a7-cbab303ab8ca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.032678 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" (UID: "0da56fdd-7e96-4fd2-93a7-cbab303ab8ca"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.072858 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6dzx\" (UniqueName: \"kubernetes.io/projected/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-kube-api-access-n6dzx\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.072887 4915 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.072900 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.078406 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" (UID: "0da56fdd-7e96-4fd2-93a7-cbab303ab8ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.116603 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-config-data" (OuterVolumeSpecName: "config-data") pod "0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" (UID: "0da56fdd-7e96-4fd2-93a7-cbab303ab8ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.147201 4915 scope.go:117] "RemoveContainer" containerID="a697934c1cbaa1a117ec4d480fcd6e76855e24f2c8cac88d01de1fe73ff91a90" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.170079 4915 scope.go:117] "RemoveContainer" containerID="fc018c5e742b5c3febfa27f05dba9edd33bf8b77a5fa5222fc466cfda4197a4b" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.175359 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.175502 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.195349 4915 scope.go:117] "RemoveContainer" containerID="652a122da7c263e7829bc43528afaa69be1053dfa9eb40937ea0c6fa7bc1fa7a" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.220103 4915 scope.go:117] "RemoveContainer" containerID="b296386b7382c00fb411a5ee525eccc1763b1dd4f49f297354a2d7120d31fb75" Nov 24 21:44:29 crc kubenswrapper[4915]: E1124 21:44:29.221043 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b296386b7382c00fb411a5ee525eccc1763b1dd4f49f297354a2d7120d31fb75\": container with ID starting with b296386b7382c00fb411a5ee525eccc1763b1dd4f49f297354a2d7120d31fb75 not found: ID does not exist" containerID="b296386b7382c00fb411a5ee525eccc1763b1dd4f49f297354a2d7120d31fb75" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.221099 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b296386b7382c00fb411a5ee525eccc1763b1dd4f49f297354a2d7120d31fb75"} 
err="failed to get container status \"b296386b7382c00fb411a5ee525eccc1763b1dd4f49f297354a2d7120d31fb75\": rpc error: code = NotFound desc = could not find container \"b296386b7382c00fb411a5ee525eccc1763b1dd4f49f297354a2d7120d31fb75\": container with ID starting with b296386b7382c00fb411a5ee525eccc1763b1dd4f49f297354a2d7120d31fb75 not found: ID does not exist" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.221132 4915 scope.go:117] "RemoveContainer" containerID="a697934c1cbaa1a117ec4d480fcd6e76855e24f2c8cac88d01de1fe73ff91a90" Nov 24 21:44:29 crc kubenswrapper[4915]: E1124 21:44:29.221504 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a697934c1cbaa1a117ec4d480fcd6e76855e24f2c8cac88d01de1fe73ff91a90\": container with ID starting with a697934c1cbaa1a117ec4d480fcd6e76855e24f2c8cac88d01de1fe73ff91a90 not found: ID does not exist" containerID="a697934c1cbaa1a117ec4d480fcd6e76855e24f2c8cac88d01de1fe73ff91a90" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.221530 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a697934c1cbaa1a117ec4d480fcd6e76855e24f2c8cac88d01de1fe73ff91a90"} err="failed to get container status \"a697934c1cbaa1a117ec4d480fcd6e76855e24f2c8cac88d01de1fe73ff91a90\": rpc error: code = NotFound desc = could not find container \"a697934c1cbaa1a117ec4d480fcd6e76855e24f2c8cac88d01de1fe73ff91a90\": container with ID starting with a697934c1cbaa1a117ec4d480fcd6e76855e24f2c8cac88d01de1fe73ff91a90 not found: ID does not exist" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.221547 4915 scope.go:117] "RemoveContainer" containerID="fc018c5e742b5c3febfa27f05dba9edd33bf8b77a5fa5222fc466cfda4197a4b" Nov 24 21:44:29 crc kubenswrapper[4915]: E1124 21:44:29.221789 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"fc018c5e742b5c3febfa27f05dba9edd33bf8b77a5fa5222fc466cfda4197a4b\": container with ID starting with fc018c5e742b5c3febfa27f05dba9edd33bf8b77a5fa5222fc466cfda4197a4b not found: ID does not exist" containerID="fc018c5e742b5c3febfa27f05dba9edd33bf8b77a5fa5222fc466cfda4197a4b" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.221818 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc018c5e742b5c3febfa27f05dba9edd33bf8b77a5fa5222fc466cfda4197a4b"} err="failed to get container status \"fc018c5e742b5c3febfa27f05dba9edd33bf8b77a5fa5222fc466cfda4197a4b\": rpc error: code = NotFound desc = could not find container \"fc018c5e742b5c3febfa27f05dba9edd33bf8b77a5fa5222fc466cfda4197a4b\": container with ID starting with fc018c5e742b5c3febfa27f05dba9edd33bf8b77a5fa5222fc466cfda4197a4b not found: ID does not exist" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.221837 4915 scope.go:117] "RemoveContainer" containerID="652a122da7c263e7829bc43528afaa69be1053dfa9eb40937ea0c6fa7bc1fa7a" Nov 24 21:44:29 crc kubenswrapper[4915]: E1124 21:44:29.222512 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"652a122da7c263e7829bc43528afaa69be1053dfa9eb40937ea0c6fa7bc1fa7a\": container with ID starting with 652a122da7c263e7829bc43528afaa69be1053dfa9eb40937ea0c6fa7bc1fa7a not found: ID does not exist" containerID="652a122da7c263e7829bc43528afaa69be1053dfa9eb40937ea0c6fa7bc1fa7a" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.222533 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"652a122da7c263e7829bc43528afaa69be1053dfa9eb40937ea0c6fa7bc1fa7a"} err="failed to get container status \"652a122da7c263e7829bc43528afaa69be1053dfa9eb40937ea0c6fa7bc1fa7a\": rpc error: code = NotFound desc = could not find container \"652a122da7c263e7829bc43528afaa69be1053dfa9eb40937ea0c6fa7bc1fa7a\": container with ID 
starting with 652a122da7c263e7829bc43528afaa69be1053dfa9eb40937ea0c6fa7bc1fa7a not found: ID does not exist" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.334829 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.365685 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.380576 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:29 crc kubenswrapper[4915]: E1124 21:44:29.381121 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="sg-core" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.381139 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="sg-core" Nov 24 21:44:29 crc kubenswrapper[4915]: E1124 21:44:29.381180 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="ceilometer-notification-agent" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.381188 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="ceilometer-notification-agent" Nov 24 21:44:29 crc kubenswrapper[4915]: E1124 21:44:29.381212 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96df9840-ef15-4b5e-a23c-a7c0dbdf9d14" containerName="mariadb-account-create" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.381220 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="96df9840-ef15-4b5e-a23c-a7c0dbdf9d14" containerName="mariadb-account-create" Nov 24 21:44:29 crc kubenswrapper[4915]: E1124 21:44:29.381235 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="ceilometer-central-agent" Nov 24 21:44:29 
crc kubenswrapper[4915]: I1124 21:44:29.381243 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="ceilometer-central-agent" Nov 24 21:44:29 crc kubenswrapper[4915]: E1124 21:44:29.381262 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c94bae1f-7fd1-400e-b38f-6d835e52bb97" containerName="mariadb-database-create" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.381269 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c94bae1f-7fd1-400e-b38f-6d835e52bb97" containerName="mariadb-database-create" Nov 24 21:44:29 crc kubenswrapper[4915]: E1124 21:44:29.381283 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="proxy-httpd" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.381290 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="proxy-httpd" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.381545 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="96df9840-ef15-4b5e-a23c-a7c0dbdf9d14" containerName="mariadb-account-create" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.381566 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="proxy-httpd" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.381590 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c94bae1f-7fd1-400e-b38f-6d835e52bb97" containerName="mariadb-database-create" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.381602 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="ceilometer-central-agent" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.381614 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" 
containerName="sg-core" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.381623 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" containerName="ceilometer-notification-agent" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.385920 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.387751 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.388086 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.403304 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.541654 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.548567 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.583580 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e821fdc7-f166-4347-8087-341868680ed0-run-httpd\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.583644 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-scripts\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.583689 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.583837 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-config-data\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.583897 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e821fdc7-f166-4347-8087-341868680ed0-log-httpd\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.583969 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.584014 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7vmh\" (UniqueName: \"kubernetes.io/projected/e821fdc7-f166-4347-8087-341868680ed0-kube-api-access-s7vmh\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.686025 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt5wv\" (UniqueName: \"kubernetes.io/projected/5e0b0bf9-6641-4a71-8adb-2c6e97412718-kube-api-access-jt5wv\") pod \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.686266 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-combined-ca-bundle\") pod \"fb3b004f-82a5-46ab-aff4-223567ddd793\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.686390 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-config-data\") pod \"fb3b004f-82a5-46ab-aff4-223567ddd793\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.686487 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-scripts\") pod \"fb3b004f-82a5-46ab-aff4-223567ddd793\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.686532 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-scripts\") pod \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.686605 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcdks\" (UniqueName: \"kubernetes.io/projected/fb3b004f-82a5-46ab-aff4-223567ddd793-kube-api-access-zcdks\") pod \"fb3b004f-82a5-46ab-aff4-223567ddd793\" (UID: \"fb3b004f-82a5-46ab-aff4-223567ddd793\") " Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.686635 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-combined-ca-bundle\") pod \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.687662 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-config-data\") pod \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.688733 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e821fdc7-f166-4347-8087-341868680ed0-run-httpd\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.688817 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-scripts\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.688897 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.689007 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-config-data\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.689070 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e821fdc7-f166-4347-8087-341868680ed0-log-httpd\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.689136 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.689189 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7vmh\" (UniqueName: \"kubernetes.io/projected/e821fdc7-f166-4347-8087-341868680ed0-kube-api-access-s7vmh\") pod \"ceilometer-0\" (UID: 
\"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.689428 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e821fdc7-f166-4347-8087-341868680ed0-run-httpd\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.691892 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e821fdc7-f166-4347-8087-341868680ed0-log-httpd\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.692556 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-scripts" (OuterVolumeSpecName: "scripts") pod "5e0b0bf9-6641-4a71-8adb-2c6e97412718" (UID: "5e0b0bf9-6641-4a71-8adb-2c6e97412718"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.695581 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0b0bf9-6641-4a71-8adb-2c6e97412718-kube-api-access-jt5wv" (OuterVolumeSpecName: "kube-api-access-jt5wv") pod "5e0b0bf9-6641-4a71-8adb-2c6e97412718" (UID: "5e0b0bf9-6641-4a71-8adb-2c6e97412718"). InnerVolumeSpecName "kube-api-access-jt5wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.697656 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-scripts" (OuterVolumeSpecName: "scripts") pod "fb3b004f-82a5-46ab-aff4-223567ddd793" (UID: "fb3b004f-82a5-46ab-aff4-223567ddd793"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.699224 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.699543 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-scripts\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.703557 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.704980 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb3b004f-82a5-46ab-aff4-223567ddd793-kube-api-access-zcdks" (OuterVolumeSpecName: "kube-api-access-zcdks") pod "fb3b004f-82a5-46ab-aff4-223567ddd793" (UID: "fb3b004f-82a5-46ab-aff4-223567ddd793"). InnerVolumeSpecName "kube-api-access-zcdks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.713163 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-config-data\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.714397 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7vmh\" (UniqueName: \"kubernetes.io/projected/e821fdc7-f166-4347-8087-341868680ed0-kube-api-access-s7vmh\") pod \"ceilometer-0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " pod="openstack/ceilometer-0" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.728130 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-config-data" (OuterVolumeSpecName: "config-data") pod "fb3b004f-82a5-46ab-aff4-223567ddd793" (UID: "fb3b004f-82a5-46ab-aff4-223567ddd793"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.733333 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-688b9f5b49-ncs8x" podUID="0327c0c2-dd19-48b6-946d-3b1460a7a260" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.215:5353: i/o timeout" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.742680 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7b6fc899f8-tjzhx" podUID="cda3e4e6-1dbb-4bcb-bdce-d0a62ea9ad8f" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.216:8000/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.751054 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb3b004f-82a5-46ab-aff4-223567ddd793" (UID: "fb3b004f-82a5-46ab-aff4-223567ddd793"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:29 crc kubenswrapper[4915]: E1124 21:44:29.759492 4915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-config-data podName:5e0b0bf9-6641-4a71-8adb-2c6e97412718 nodeName:}" failed. No retries permitted until 2025-11-24 21:44:30.259465568 +0000 UTC m=+1488.575717751 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-config-data") pod "5e0b0bf9-6641-4a71-8adb-2c6e97412718" (UID: "5e0b0bf9-6641-4a71-8adb-2c6e97412718") : error deleting /var/lib/kubelet/pods/5e0b0bf9-6641-4a71-8adb-2c6e97412718/volume-subpaths: remove /var/lib/kubelet/pods/5e0b0bf9-6641-4a71-8adb-2c6e97412718/volume-subpaths: no such file or directory Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.762525 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e0b0bf9-6641-4a71-8adb-2c6e97412718" (UID: "5e0b0bf9-6641-4a71-8adb-2c6e97412718"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.791346 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.791513 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.791602 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.791663 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcdks\" (UniqueName: \"kubernetes.io/projected/fb3b004f-82a5-46ab-aff4-223567ddd793-kube-api-access-zcdks\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.791743 4915 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.791832 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt5wv\" (UniqueName: \"kubernetes.io/projected/5e0b0bf9-6641-4a71-8adb-2c6e97412718-kube-api-access-jt5wv\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.791914 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb3b004f-82a5-46ab-aff4-223567ddd793-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:29 crc kubenswrapper[4915]: I1124 21:44:29.838135 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.010749 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jshft" event={"ID":"fb3b004f-82a5-46ab-aff4-223567ddd793","Type":"ContainerDied","Data":"9d5d59933769e6e1672f812f38d95d36b870d88354b7c9f9cf7def2d7379960e"} Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.011086 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d5d59933769e6e1672f812f38d95d36b870d88354b7c9f9cf7def2d7379960e" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.011377 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jshft" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.023665 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-298k5" event={"ID":"5e0b0bf9-6641-4a71-8adb-2c6e97412718","Type":"ContainerDied","Data":"a57b442a381a6d7f8268fcdd01fdfa9c67484fa9dac6c764f9cb211f20241dc9"} Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.023726 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a57b442a381a6d7f8268fcdd01fdfa9c67484fa9dac6c764f9cb211f20241dc9" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.023789 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-298k5" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.096002 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 21:44:30 crc kubenswrapper[4915]: E1124 21:44:30.097094 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0b0bf9-6641-4a71-8adb-2c6e97412718" containerName="nova-cell1-conductor-db-sync" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.097117 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0b0bf9-6641-4a71-8adb-2c6e97412718" containerName="nova-cell1-conductor-db-sync" Nov 24 21:44:30 crc kubenswrapper[4915]: E1124 21:44:30.097166 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb3b004f-82a5-46ab-aff4-223567ddd793" containerName="nova-manage" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.097173 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb3b004f-82a5-46ab-aff4-223567ddd793" containerName="nova-manage" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.097365 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb3b004f-82a5-46ab-aff4-223567ddd793" containerName="nova-manage" Nov 24 21:44:30 crc 
kubenswrapper[4915]: I1124 21:44:30.097381 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0b0bf9-6641-4a71-8adb-2c6e97412718" containerName="nova-cell1-conductor-db-sync" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.098393 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.105230 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm88m\" (UniqueName: \"kubernetes.io/projected/7a63b596-ad59-47a7-bc24-7efbc0a4001b-kube-api-access-mm88m\") pod \"nova-cell1-conductor-0\" (UID: \"7a63b596-ad59-47a7-bc24-7efbc0a4001b\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.105339 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a63b596-ad59-47a7-bc24-7efbc0a4001b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7a63b596-ad59-47a7-bc24-7efbc0a4001b\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.105413 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a63b596-ad59-47a7-bc24-7efbc0a4001b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7a63b596-ad59-47a7-bc24-7efbc0a4001b\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.113454 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.206756 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a63b596-ad59-47a7-bc24-7efbc0a4001b-config-data\") pod 
\"nova-cell1-conductor-0\" (UID: \"7a63b596-ad59-47a7-bc24-7efbc0a4001b\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.206840 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a63b596-ad59-47a7-bc24-7efbc0a4001b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7a63b596-ad59-47a7-bc24-7efbc0a4001b\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.206948 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm88m\" (UniqueName: \"kubernetes.io/projected/7a63b596-ad59-47a7-bc24-7efbc0a4001b-kube-api-access-mm88m\") pod \"nova-cell1-conductor-0\" (UID: \"7a63b596-ad59-47a7-bc24-7efbc0a4001b\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.221061 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a63b596-ad59-47a7-bc24-7efbc0a4001b-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"7a63b596-ad59-47a7-bc24-7efbc0a4001b\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.225485 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a63b596-ad59-47a7-bc24-7efbc0a4001b-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"7a63b596-ad59-47a7-bc24-7efbc0a4001b\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.258104 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.258326 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c3012cd0-1234-4d16-ae47-7eb87738f54d" 
containerName="nova-api-log" containerID="cri-o://28f62c1e42afb84029f597f292bd6d55dca3a4f11a48141552676a97c68656b7" gracePeriod=30 Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.258768 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c3012cd0-1234-4d16-ae47-7eb87738f54d" containerName="nova-api-api" containerID="cri-o://488ec1ee3de8b48b38b433ee5c077db6d3040ee6c11194adb3b5a15ab22d8bf7" gracePeriod=30 Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.267604 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm88m\" (UniqueName: \"kubernetes.io/projected/7a63b596-ad59-47a7-bc24-7efbc0a4001b-kube-api-access-mm88m\") pod \"nova-cell1-conductor-0\" (UID: \"7a63b596-ad59-47a7-bc24-7efbc0a4001b\") " pod="openstack/nova-cell1-conductor-0" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.289072 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.289259 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="cf56610a-46ba-4937-b001-f3054a401f2f" containerName="nova-scheduler-scheduler" containerID="cri-o://a788202b84d776934294fe577d610efff3d340bf505787b8ad7def6968f798f2" gracePeriod=30 Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.308297 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-config-data\") pod \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\" (UID: \"5e0b0bf9-6641-4a71-8adb-2c6e97412718\") " Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.314628 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-config-data" (OuterVolumeSpecName: "config-data") pod 
"5e0b0bf9-6641-4a71-8adb-2c6e97412718" (UID: "5e0b0bf9-6641-4a71-8adb-2c6e97412718"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.363815 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.371278 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b6dac01c-4f6c-478a-8686-b2450ad59ec2" containerName="nova-metadata-log" containerID="cri-o://a38a32992b751f1ac867bb8b4ddb156e46ba5f7b8d2bcf11a0b8668b25057a20" gracePeriod=30 Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.371830 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b6dac01c-4f6c-478a-8686-b2450ad59ec2" containerName="nova-metadata-metadata" containerID="cri-o://b8e6d145eb90989f54e49165184ca6e3e65eacaeb7ef6489842bf7b5cf0f6522" gracePeriod=30 Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.376627 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.376657 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.383294 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.419449 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e0b0bf9-6641-4a71-8adb-2c6e97412718-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.444601 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0da56fdd-7e96-4fd2-93a7-cbab303ab8ca" 
path="/var/lib/kubelet/pods/0da56fdd-7e96-4fd2-93a7-cbab303ab8ca/volumes" Nov 24 21:44:30 crc kubenswrapper[4915]: I1124 21:44:30.462407 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.025999 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.035711 4915 generic.go:334] "Generic (PLEG): container finished" podID="c3012cd0-1234-4d16-ae47-7eb87738f54d" containerID="28f62c1e42afb84029f597f292bd6d55dca3a4f11a48141552676a97c68656b7" exitCode=143 Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.035769 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3012cd0-1234-4d16-ae47-7eb87738f54d","Type":"ContainerDied","Data":"28f62c1e42afb84029f597f292bd6d55dca3a4f11a48141552676a97c68656b7"} Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.037291 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e821fdc7-f166-4347-8087-341868680ed0","Type":"ContainerStarted","Data":"cdcfbf740bc09231d322579dde74446ff45bbd7c6f9084d8085911877f7ffcdd"} Nov 24 21:44:31 crc kubenswrapper[4915]: W1124 21:44:31.037741 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a63b596_ad59_47a7_bc24_7efbc0a4001b.slice/crio-89874ee651bb647ebfa37fc3c0ee98f52287c4f1c37c8b117edb6dded9e03e4c WatchSource:0}: Error finding container 89874ee651bb647ebfa37fc3c0ee98f52287c4f1c37c8b117edb6dded9e03e4c: Status 404 returned error can't find the container with id 89874ee651bb647ebfa37fc3c0ee98f52287c4f1c37c8b117edb6dded9e03e4c Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.039770 4915 generic.go:334] "Generic (PLEG): container finished" podID="b6dac01c-4f6c-478a-8686-b2450ad59ec2" 
containerID="b8e6d145eb90989f54e49165184ca6e3e65eacaeb7ef6489842bf7b5cf0f6522" exitCode=0 Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.039816 4915 generic.go:334] "Generic (PLEG): container finished" podID="b6dac01c-4f6c-478a-8686-b2450ad59ec2" containerID="a38a32992b751f1ac867bb8b4ddb156e46ba5f7b8d2bcf11a0b8668b25057a20" exitCode=143 Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.039837 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6dac01c-4f6c-478a-8686-b2450ad59ec2","Type":"ContainerDied","Data":"b8e6d145eb90989f54e49165184ca6e3e65eacaeb7ef6489842bf7b5cf0f6522"} Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.039883 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6dac01c-4f6c-478a-8686-b2450ad59ec2","Type":"ContainerDied","Data":"a38a32992b751f1ac867bb8b4ddb156e46ba5f7b8d2bcf11a0b8668b25057a20"} Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.250645 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.340742 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-nova-metadata-tls-certs\") pod \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.340880 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-combined-ca-bundle\") pod \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.341060 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6wvm\" (UniqueName: \"kubernetes.io/projected/b6dac01c-4f6c-478a-8686-b2450ad59ec2-kube-api-access-v6wvm\") pod \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.341106 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6dac01c-4f6c-478a-8686-b2450ad59ec2-logs\") pod \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.341157 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-config-data\") pod \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\" (UID: \"b6dac01c-4f6c-478a-8686-b2450ad59ec2\") " Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.342602 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b6dac01c-4f6c-478a-8686-b2450ad59ec2-logs" (OuterVolumeSpecName: "logs") pod "b6dac01c-4f6c-478a-8686-b2450ad59ec2" (UID: "b6dac01c-4f6c-478a-8686-b2450ad59ec2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.349074 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6dac01c-4f6c-478a-8686-b2450ad59ec2-kube-api-access-v6wvm" (OuterVolumeSpecName: "kube-api-access-v6wvm") pod "b6dac01c-4f6c-478a-8686-b2450ad59ec2" (UID: "b6dac01c-4f6c-478a-8686-b2450ad59ec2"). InnerVolumeSpecName "kube-api-access-v6wvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.386124 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6dac01c-4f6c-478a-8686-b2450ad59ec2" (UID: "b6dac01c-4f6c-478a-8686-b2450ad59ec2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.396434 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-config-data" (OuterVolumeSpecName: "config-data") pod "b6dac01c-4f6c-478a-8686-b2450ad59ec2" (UID: "b6dac01c-4f6c-478a-8686-b2450ad59ec2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.425949 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b6dac01c-4f6c-478a-8686-b2450ad59ec2" (UID: "b6dac01c-4f6c-478a-8686-b2450ad59ec2"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.444356 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6wvm\" (UniqueName: \"kubernetes.io/projected/b6dac01c-4f6c-478a-8686-b2450ad59ec2-kube-api-access-v6wvm\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.444394 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6dac01c-4f6c-478a-8686-b2450ad59ec2-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.444407 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.444418 4915 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.444429 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6dac01c-4f6c-478a-8686-b2450ad59ec2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.503681 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-cxqp9"] Nov 24 21:44:31 crc kubenswrapper[4915]: E1124 21:44:31.505482 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6dac01c-4f6c-478a-8686-b2450ad59ec2" containerName="nova-metadata-metadata" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.505577 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6dac01c-4f6c-478a-8686-b2450ad59ec2" 
containerName="nova-metadata-metadata" Nov 24 21:44:31 crc kubenswrapper[4915]: E1124 21:44:31.505743 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6dac01c-4f6c-478a-8686-b2450ad59ec2" containerName="nova-metadata-log" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.505864 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6dac01c-4f6c-478a-8686-b2450ad59ec2" containerName="nova-metadata-log" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.506259 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6dac01c-4f6c-478a-8686-b2450ad59ec2" containerName="nova-metadata-metadata" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.506343 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6dac01c-4f6c-478a-8686-b2450ad59ec2" containerName="nova-metadata-log" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.507893 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.509689 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.510608 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-497jp" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.510892 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.511070 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.518603 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-cxqp9"] Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.574197 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-config-data\") pod \"aodh-db-sync-cxqp9\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.574256 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-combined-ca-bundle\") pod \"aodh-db-sync-cxqp9\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.574315 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-scripts\") pod \"aodh-db-sync-cxqp9\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.574335 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmcxx\" (UniqueName: \"kubernetes.io/projected/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-kube-api-access-rmcxx\") pod \"aodh-db-sync-cxqp9\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.676174 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-config-data\") pod \"aodh-db-sync-cxqp9\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.676253 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-combined-ca-bundle\") pod \"aodh-db-sync-cxqp9\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.676310 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-scripts\") pod \"aodh-db-sync-cxqp9\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.676333 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmcxx\" (UniqueName: \"kubernetes.io/projected/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-kube-api-access-rmcxx\") pod \"aodh-db-sync-cxqp9\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.685680 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-combined-ca-bundle\") pod \"aodh-db-sync-cxqp9\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.687507 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-config-data\") pod \"aodh-db-sync-cxqp9\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.687756 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-scripts\") pod \"aodh-db-sync-cxqp9\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " pod="openstack/aodh-db-sync-cxqp9" Nov 24 
21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.701416 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmcxx\" (UniqueName: \"kubernetes.io/projected/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-kube-api-access-rmcxx\") pod \"aodh-db-sync-cxqp9\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:31 crc kubenswrapper[4915]: I1124 21:44:31.774164 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.059682 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6dac01c-4f6c-478a-8686-b2450ad59ec2","Type":"ContainerDied","Data":"156f1f2045d643816264d7c54ae38bccff429ddaf8885d3f01a78fdc31787960"} Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.060021 4915 scope.go:117] "RemoveContainer" containerID="b8e6d145eb90989f54e49165184ca6e3e65eacaeb7ef6489842bf7b5cf0f6522" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.060177 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.064524 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7a63b596-ad59-47a7-bc24-7efbc0a4001b","Type":"ContainerStarted","Data":"04b00f8b4ea23f99fffa8ceeb3cf9a474b38fc7184ec2ea14f05af58dfd4540f"} Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.064576 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"7a63b596-ad59-47a7-bc24-7efbc0a4001b","Type":"ContainerStarted","Data":"89874ee651bb647ebfa37fc3c0ee98f52287c4f1c37c8b117edb6dded9e03e4c"} Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.065274 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.077641 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e821fdc7-f166-4347-8087-341868680ed0","Type":"ContainerStarted","Data":"78dd86358ee6c0cc1ed1da71f0880114bd2f1e8812ed147732e76fece1820578"} Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.077681 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e821fdc7-f166-4347-8087-341868680ed0","Type":"ContainerStarted","Data":"7d334b2e176b14611bb63d228a184c051d6ff8d39b2a646ab95dce4c91eaded3"} Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.105733 4915 scope.go:117] "RemoveContainer" containerID="a38a32992b751f1ac867bb8b4ddb156e46ba5f7b8d2bcf11a0b8668b25057a20" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.107520 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.107504515 podStartE2EDuration="2.107504515s" podCreationTimestamp="2025-11-24 21:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:44:32.0913554 +0000 UTC m=+1490.407607593" watchObservedRunningTime="2025-11-24 21:44:32.107504515 +0000 UTC m=+1490.423756688" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.163767 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.185757 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.212661 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.214809 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.217217 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.217314 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.228708 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.275489 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-cxqp9"] Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.298319 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.298666 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-config-data\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.298770 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mblzn\" (UniqueName: \"kubernetes.io/projected/fd9c0d54-711e-4173-bc12-063f838129e4-kube-api-access-mblzn\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.298916 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd9c0d54-711e-4173-bc12-063f838129e4-logs\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.298949 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.400195 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mblzn\" (UniqueName: \"kubernetes.io/projected/fd9c0d54-711e-4173-bc12-063f838129e4-kube-api-access-mblzn\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.400482 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fd9c0d54-711e-4173-bc12-063f838129e4-logs\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.400502 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.400534 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.400636 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-config-data\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.401084 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd9c0d54-711e-4173-bc12-063f838129e4-logs\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.405098 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 
crc kubenswrapper[4915]: I1124 21:44:32.405326 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-config-data\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.406323 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.421667 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mblzn\" (UniqueName: \"kubernetes.io/projected/fd9c0d54-711e-4173-bc12-063f838129e4-kube-api-access-mblzn\") pod \"nova-metadata-0\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " pod="openstack/nova-metadata-0" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.442126 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6dac01c-4f6c-478a-8686-b2450ad59ec2" path="/var/lib/kubelet/pods/b6dac01c-4f6c-478a-8686-b2450ad59ec2/volumes" Nov 24 21:44:32 crc kubenswrapper[4915]: I1124 21:44:32.549151 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:44:33 crc kubenswrapper[4915]: I1124 21:44:33.094423 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-cxqp9" event={"ID":"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6","Type":"ContainerStarted","Data":"0b1284a95537413d4706b26032331aad9aaa3e7d6c1a9a66ff5c278e661cb6da"} Nov 24 21:44:33 crc kubenswrapper[4915]: I1124 21:44:33.101697 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e821fdc7-f166-4347-8087-341868680ed0","Type":"ContainerStarted","Data":"019c777f7456362868035db56dc52b710b0b59619f1f69e8bd2254d90e518d0f"} Nov 24 21:44:33 crc kubenswrapper[4915]: E1124 21:44:33.128442 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a788202b84d776934294fe577d610efff3d340bf505787b8ad7def6968f798f2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 21:44:33 crc kubenswrapper[4915]: E1124 21:44:33.130056 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a788202b84d776934294fe577d610efff3d340bf505787b8ad7def6968f798f2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 21:44:33 crc kubenswrapper[4915]: E1124 21:44:33.131078 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a788202b84d776934294fe577d610efff3d340bf505787b8ad7def6968f798f2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 24 21:44:33 crc kubenswrapper[4915]: E1124 21:44:33.131109 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot 
register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="cf56610a-46ba-4937-b001-f3054a401f2f" containerName="nova-scheduler-scheduler" Nov 24 21:44:33 crc kubenswrapper[4915]: I1124 21:44:33.151938 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.162659 4915 generic.go:334] "Generic (PLEG): container finished" podID="c3012cd0-1234-4d16-ae47-7eb87738f54d" containerID="488ec1ee3de8b48b38b433ee5c077db6d3040ee6c11194adb3b5a15ab22d8bf7" exitCode=0 Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.163003 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3012cd0-1234-4d16-ae47-7eb87738f54d","Type":"ContainerDied","Data":"488ec1ee3de8b48b38b433ee5c077db6d3040ee6c11194adb3b5a15ab22d8bf7"} Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.177648 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd9c0d54-711e-4173-bc12-063f838129e4","Type":"ContainerStarted","Data":"4d191e4ffe24bf8103bc844b3369351ac9597be595150c6a693ec88108a5ac59"} Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.177933 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd9c0d54-711e-4173-bc12-063f838129e4","Type":"ContainerStarted","Data":"9a8e6378a26a819d1c386971aa686b62d52e803cc3f083df6ef55beda11dcd0d"} Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.177946 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd9c0d54-711e-4173-bc12-063f838129e4","Type":"ContainerStarted","Data":"57af5ac38b2217d4ef840c36711fcd9138dc4058e88dba668c32a86eac92b561"} Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.195544 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.236844 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.236824431 podStartE2EDuration="2.236824431s" podCreationTimestamp="2025-11-24 21:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:44:34.199296021 +0000 UTC m=+1492.515548204" watchObservedRunningTime="2025-11-24 21:44:34.236824431 +0000 UTC m=+1492.553076604" Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.352338 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tdqj\" (UniqueName: \"kubernetes.io/projected/c3012cd0-1234-4d16-ae47-7eb87738f54d-kube-api-access-6tdqj\") pod \"c3012cd0-1234-4d16-ae47-7eb87738f54d\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.352434 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3012cd0-1234-4d16-ae47-7eb87738f54d-combined-ca-bundle\") pod \"c3012cd0-1234-4d16-ae47-7eb87738f54d\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.352613 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3012cd0-1234-4d16-ae47-7eb87738f54d-logs\") pod \"c3012cd0-1234-4d16-ae47-7eb87738f54d\" (UID: \"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.352669 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3012cd0-1234-4d16-ae47-7eb87738f54d-config-data\") pod \"c3012cd0-1234-4d16-ae47-7eb87738f54d\" (UID: 
\"c3012cd0-1234-4d16-ae47-7eb87738f54d\") " Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.353532 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3012cd0-1234-4d16-ae47-7eb87738f54d-logs" (OuterVolumeSpecName: "logs") pod "c3012cd0-1234-4d16-ae47-7eb87738f54d" (UID: "c3012cd0-1234-4d16-ae47-7eb87738f54d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.365703 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3012cd0-1234-4d16-ae47-7eb87738f54d-kube-api-access-6tdqj" (OuterVolumeSpecName: "kube-api-access-6tdqj") pod "c3012cd0-1234-4d16-ae47-7eb87738f54d" (UID: "c3012cd0-1234-4d16-ae47-7eb87738f54d"). InnerVolumeSpecName "kube-api-access-6tdqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.433599 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3012cd0-1234-4d16-ae47-7eb87738f54d-config-data" (OuterVolumeSpecName: "config-data") pod "c3012cd0-1234-4d16-ae47-7eb87738f54d" (UID: "c3012cd0-1234-4d16-ae47-7eb87738f54d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.442267 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3012cd0-1234-4d16-ae47-7eb87738f54d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3012cd0-1234-4d16-ae47-7eb87738f54d" (UID: "c3012cd0-1234-4d16-ae47-7eb87738f54d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.456508 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3012cd0-1234-4d16-ae47-7eb87738f54d-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.456545 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tdqj\" (UniqueName: \"kubernetes.io/projected/c3012cd0-1234-4d16-ae47-7eb87738f54d-kube-api-access-6tdqj\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.456560 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3012cd0-1234-4d16-ae47-7eb87738f54d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:34 crc kubenswrapper[4915]: I1124 21:44:34.456572 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3012cd0-1234-4d16-ae47-7eb87738f54d-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.189548 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e821fdc7-f166-4347-8087-341868680ed0","Type":"ContainerStarted","Data":"8fb222fbac2d203d15ec3549fc8309d9b833504ee6b0080402eecc860aec4373"} Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.189996 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.193626 4915 generic.go:334] "Generic (PLEG): container finished" podID="cf56610a-46ba-4937-b001-f3054a401f2f" containerID="a788202b84d776934294fe577d610efff3d340bf505787b8ad7def6968f798f2" exitCode=0 Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.193670 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"cf56610a-46ba-4937-b001-f3054a401f2f","Type":"ContainerDied","Data":"a788202b84d776934294fe577d610efff3d340bf505787b8ad7def6968f798f2"} Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.198041 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.198066 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c3012cd0-1234-4d16-ae47-7eb87738f54d","Type":"ContainerDied","Data":"8a887a6ba1d0968e665b05e15e8a3016c09d0070907bfd09caa34d119ac08c3b"} Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.198152 4915 scope.go:117] "RemoveContainer" containerID="488ec1ee3de8b48b38b433ee5c077db6d3040ee6c11194adb3b5a15ab22d8bf7" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.210682 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.463832579 podStartE2EDuration="6.210664969s" podCreationTimestamp="2025-11-24 21:44:29 +0000 UTC" firstStartedPulling="2025-11-24 21:44:30.379233669 +0000 UTC m=+1488.695485832" lastFinishedPulling="2025-11-24 21:44:34.126066049 +0000 UTC m=+1492.442318222" observedRunningTime="2025-11-24 21:44:35.207538645 +0000 UTC m=+1493.523790828" watchObservedRunningTime="2025-11-24 21:44:35.210664969 +0000 UTC m=+1493.526917142" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.243470 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.260716 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.273898 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 21:44:35 crc kubenswrapper[4915]: E1124 21:44:35.274537 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c3012cd0-1234-4d16-ae47-7eb87738f54d" containerName="nova-api-log" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.274560 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3012cd0-1234-4d16-ae47-7eb87738f54d" containerName="nova-api-log" Nov 24 21:44:35 crc kubenswrapper[4915]: E1124 21:44:35.274580 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3012cd0-1234-4d16-ae47-7eb87738f54d" containerName="nova-api-api" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.274588 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3012cd0-1234-4d16-ae47-7eb87738f54d" containerName="nova-api-api" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.274906 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3012cd0-1234-4d16-ae47-7eb87738f54d" containerName="nova-api-log" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.274939 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3012cd0-1234-4d16-ae47-7eb87738f54d" containerName="nova-api-api" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.276480 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.283027 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.301483 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.373992 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lwfc\" (UniqueName: \"kubernetes.io/projected/9b8dd7ab-1f63-4543-87cc-583dae222748-kube-api-access-6lwfc\") pod \"nova-api-0\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " pod="openstack/nova-api-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.374152 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8dd7ab-1f63-4543-87cc-583dae222748-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " pod="openstack/nova-api-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.374704 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b8dd7ab-1f63-4543-87cc-583dae222748-logs\") pod \"nova-api-0\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " pod="openstack/nova-api-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.375565 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8dd7ab-1f63-4543-87cc-583dae222748-config-data\") pod \"nova-api-0\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " pod="openstack/nova-api-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.477878 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9b8dd7ab-1f63-4543-87cc-583dae222748-config-data\") pod \"nova-api-0\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " pod="openstack/nova-api-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.477937 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lwfc\" (UniqueName: \"kubernetes.io/projected/9b8dd7ab-1f63-4543-87cc-583dae222748-kube-api-access-6lwfc\") pod \"nova-api-0\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " pod="openstack/nova-api-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.478006 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8dd7ab-1f63-4543-87cc-583dae222748-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " pod="openstack/nova-api-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.478030 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b8dd7ab-1f63-4543-87cc-583dae222748-logs\") pod \"nova-api-0\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " pod="openstack/nova-api-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.478583 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b8dd7ab-1f63-4543-87cc-583dae222748-logs\") pod \"nova-api-0\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " pod="openstack/nova-api-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.484296 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8dd7ab-1f63-4543-87cc-583dae222748-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " pod="openstack/nova-api-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.494336 
4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lwfc\" (UniqueName: \"kubernetes.io/projected/9b8dd7ab-1f63-4543-87cc-583dae222748-kube-api-access-6lwfc\") pod \"nova-api-0\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " pod="openstack/nova-api-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.496292 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8dd7ab-1f63-4543-87cc-583dae222748-config-data\") pod \"nova-api-0\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " pod="openstack/nova-api-0" Nov 24 21:44:35 crc kubenswrapper[4915]: I1124 21:44:35.596569 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:44:36 crc kubenswrapper[4915]: I1124 21:44:36.440265 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3012cd0-1234-4d16-ae47-7eb87738f54d" path="/var/lib/kubelet/pods/c3012cd0-1234-4d16-ae47-7eb87738f54d/volumes" Nov 24 21:44:37 crc kubenswrapper[4915]: I1124 21:44:37.450686 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:44:37 crc kubenswrapper[4915]: I1124 21:44:37.555223 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:44:37 crc kubenswrapper[4915]: I1124 21:44:37.556433 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:44:37 crc kubenswrapper[4915]: I1124 21:44:37.557332 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf56610a-46ba-4937-b001-f3054a401f2f-combined-ca-bundle\") pod \"cf56610a-46ba-4937-b001-f3054a401f2f\" (UID: \"cf56610a-46ba-4937-b001-f3054a401f2f\") " Nov 24 21:44:37 crc kubenswrapper[4915]: I1124 21:44:37.557542 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf56610a-46ba-4937-b001-f3054a401f2f-config-data\") pod \"cf56610a-46ba-4937-b001-f3054a401f2f\" (UID: \"cf56610a-46ba-4937-b001-f3054a401f2f\") " Nov 24 21:44:37 crc kubenswrapper[4915]: I1124 21:44:37.557575 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djkn5\" (UniqueName: \"kubernetes.io/projected/cf56610a-46ba-4937-b001-f3054a401f2f-kube-api-access-djkn5\") pod \"cf56610a-46ba-4937-b001-f3054a401f2f\" (UID: \"cf56610a-46ba-4937-b001-f3054a401f2f\") " Nov 24 21:44:37 crc kubenswrapper[4915]: I1124 21:44:37.568266 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf56610a-46ba-4937-b001-f3054a401f2f-kube-api-access-djkn5" (OuterVolumeSpecName: "kube-api-access-djkn5") pod "cf56610a-46ba-4937-b001-f3054a401f2f" (UID: "cf56610a-46ba-4937-b001-f3054a401f2f"). InnerVolumeSpecName "kube-api-access-djkn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:37 crc kubenswrapper[4915]: I1124 21:44:37.681506 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf56610a-46ba-4937-b001-f3054a401f2f-config-data" (OuterVolumeSpecName: "config-data") pod "cf56610a-46ba-4937-b001-f3054a401f2f" (UID: "cf56610a-46ba-4937-b001-f3054a401f2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:37 crc kubenswrapper[4915]: I1124 21:44:37.683461 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf56610a-46ba-4937-b001-f3054a401f2f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:37 crc kubenswrapper[4915]: I1124 21:44:37.683493 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djkn5\" (UniqueName: \"kubernetes.io/projected/cf56610a-46ba-4937-b001-f3054a401f2f-kube-api-access-djkn5\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:37 crc kubenswrapper[4915]: I1124 21:44:37.736903 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf56610a-46ba-4937-b001-f3054a401f2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cf56610a-46ba-4937-b001-f3054a401f2f" (UID: "cf56610a-46ba-4937-b001-f3054a401f2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:37 crc kubenswrapper[4915]: I1124 21:44:37.785446 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf56610a-46ba-4937-b001-f3054a401f2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.241973 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.241969 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cf56610a-46ba-4937-b001-f3054a401f2f","Type":"ContainerDied","Data":"abc346e3f067ece41762bf80c03a78ab7fa870619ae8ac4736b5a896ddf6a66c"} Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.293148 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.313486 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.331708 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:44:38 crc kubenswrapper[4915]: E1124 21:44:38.332249 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf56610a-46ba-4937-b001-f3054a401f2f" containerName="nova-scheduler-scheduler" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.332266 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf56610a-46ba-4937-b001-f3054a401f2f" containerName="nova-scheduler-scheduler" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.332496 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf56610a-46ba-4937-b001-f3054a401f2f" containerName="nova-scheduler-scheduler" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.333307 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.335613 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.342210 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.361036 4915 scope.go:117] "RemoveContainer" containerID="28f62c1e42afb84029f597f292bd6d55dca3a4f11a48141552676a97c68656b7" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.400615 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd1d036-89ac-47f6-8551-60f27e07700e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8fd1d036-89ac-47f6-8551-60f27e07700e\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.400762 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf5bj\" (UniqueName: \"kubernetes.io/projected/8fd1d036-89ac-47f6-8551-60f27e07700e-kube-api-access-bf5bj\") pod \"nova-scheduler-0\" (UID: \"8fd1d036-89ac-47f6-8551-60f27e07700e\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.400945 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd1d036-89ac-47f6-8551-60f27e07700e-config-data\") pod \"nova-scheduler-0\" (UID: \"8fd1d036-89ac-47f6-8551-60f27e07700e\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.448463 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf56610a-46ba-4937-b001-f3054a401f2f" path="/var/lib/kubelet/pods/cf56610a-46ba-4937-b001-f3054a401f2f/volumes" Nov 24 21:44:38 crc 
kubenswrapper[4915]: I1124 21:44:38.502880 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd1d036-89ac-47f6-8551-60f27e07700e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8fd1d036-89ac-47f6-8551-60f27e07700e\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.503230 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf5bj\" (UniqueName: \"kubernetes.io/projected/8fd1d036-89ac-47f6-8551-60f27e07700e-kube-api-access-bf5bj\") pod \"nova-scheduler-0\" (UID: \"8fd1d036-89ac-47f6-8551-60f27e07700e\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.503274 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd1d036-89ac-47f6-8551-60f27e07700e-config-data\") pod \"nova-scheduler-0\" (UID: \"8fd1d036-89ac-47f6-8551-60f27e07700e\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.511486 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd1d036-89ac-47f6-8551-60f27e07700e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8fd1d036-89ac-47f6-8551-60f27e07700e\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.511929 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd1d036-89ac-47f6-8551-60f27e07700e-config-data\") pod \"nova-scheduler-0\" (UID: \"8fd1d036-89ac-47f6-8551-60f27e07700e\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.521233 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf5bj\" (UniqueName: 
\"kubernetes.io/projected/8fd1d036-89ac-47f6-8551-60f27e07700e-kube-api-access-bf5bj\") pod \"nova-scheduler-0\" (UID: \"8fd1d036-89ac-47f6-8551-60f27e07700e\") " pod="openstack/nova-scheduler-0" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.660317 4915 scope.go:117] "RemoveContainer" containerID="a788202b84d776934294fe577d610efff3d340bf505787b8ad7def6968f798f2" Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.667051 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:44:38 crc kubenswrapper[4915]: W1124 21:44:38.907728 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b8dd7ab_1f63_4543_87cc_583dae222748.slice/crio-f16fcd2e4f1c88f16ae8b81387e4c844154633b39f1dc99df8471a7b013fa376 WatchSource:0}: Error finding container f16fcd2e4f1c88f16ae8b81387e4c844154633b39f1dc99df8471a7b013fa376: Status 404 returned error can't find the container with id f16fcd2e4f1c88f16ae8b81387e4c844154633b39f1dc99df8471a7b013fa376 Nov 24 21:44:38 crc kubenswrapper[4915]: I1124 21:44:38.911393 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:44:39 crc kubenswrapper[4915]: I1124 21:44:39.197350 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:44:39 crc kubenswrapper[4915]: W1124 21:44:39.201089 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fd1d036_89ac_47f6_8551_60f27e07700e.slice/crio-0e92efdd83ea3e70970d0af9ff746aa5d320b8c17ce03ecf6198288075128a11 WatchSource:0}: Error finding container 0e92efdd83ea3e70970d0af9ff746aa5d320b8c17ce03ecf6198288075128a11: Status 404 returned error can't find the container with id 0e92efdd83ea3e70970d0af9ff746aa5d320b8c17ce03ecf6198288075128a11 Nov 24 21:44:39 crc kubenswrapper[4915]: I1124 21:44:39.254847 4915 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fd1d036-89ac-47f6-8551-60f27e07700e","Type":"ContainerStarted","Data":"0e92efdd83ea3e70970d0af9ff746aa5d320b8c17ce03ecf6198288075128a11"} Nov 24 21:44:39 crc kubenswrapper[4915]: I1124 21:44:39.258209 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-cxqp9" event={"ID":"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6","Type":"ContainerStarted","Data":"3235d4b0da129d58e102fd39dc35374f450ebe1a35b11865ee563b25f55364ee"} Nov 24 21:44:39 crc kubenswrapper[4915]: I1124 21:44:39.262971 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b8dd7ab-1f63-4543-87cc-583dae222748","Type":"ContainerStarted","Data":"38c5fbf95086d9b42e3298a1a68e186fb7322fbbebf222d744d7e947e609c504"} Nov 24 21:44:39 crc kubenswrapper[4915]: I1124 21:44:39.263009 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b8dd7ab-1f63-4543-87cc-583dae222748","Type":"ContainerStarted","Data":"f16fcd2e4f1c88f16ae8b81387e4c844154633b39f1dc99df8471a7b013fa376"} Nov 24 21:44:39 crc kubenswrapper[4915]: I1124 21:44:39.283152 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-cxqp9" podStartSLOduration=2.154514362 podStartE2EDuration="8.283136078s" podCreationTimestamp="2025-11-24 21:44:31 +0000 UTC" firstStartedPulling="2025-11-24 21:44:32.301811627 +0000 UTC m=+1490.618063800" lastFinishedPulling="2025-11-24 21:44:38.430433343 +0000 UTC m=+1496.746685516" observedRunningTime="2025-11-24 21:44:39.277246919 +0000 UTC m=+1497.593499092" watchObservedRunningTime="2025-11-24 21:44:39.283136078 +0000 UTC m=+1497.599388251" Nov 24 21:44:40 crc kubenswrapper[4915]: I1124 21:44:40.277707 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"9b8dd7ab-1f63-4543-87cc-583dae222748","Type":"ContainerStarted","Data":"4c8a638356fdd403a5f2d009fa7513c36a06b3197ceb5d9e6cf9282e031d42c9"} Nov 24 21:44:40 crc kubenswrapper[4915]: I1124 21:44:40.280504 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fd1d036-89ac-47f6-8551-60f27e07700e","Type":"ContainerStarted","Data":"a4eba0bb468d4d3fdaa6063f5393964bef891aaf13dee453fb35e20c9787be1f"} Nov 24 21:44:40 crc kubenswrapper[4915]: I1124 21:44:40.312892 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.31286795 podStartE2EDuration="5.31286795s" podCreationTimestamp="2025-11-24 21:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:44:40.298413601 +0000 UTC m=+1498.614665784" watchObservedRunningTime="2025-11-24 21:44:40.31286795 +0000 UTC m=+1498.629120123" Nov 24 21:44:40 crc kubenswrapper[4915]: I1124 21:44:40.334500 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.334475283 podStartE2EDuration="2.334475283s" podCreationTimestamp="2025-11-24 21:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:44:40.321147713 +0000 UTC m=+1498.637399886" watchObservedRunningTime="2025-11-24 21:44:40.334475283 +0000 UTC m=+1498.650727476" Nov 24 21:44:40 crc kubenswrapper[4915]: I1124 21:44:40.511140 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 24 21:44:41 crc kubenswrapper[4915]: I1124 21:44:41.290970 4915 generic.go:334] "Generic (PLEG): container finished" podID="6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6" containerID="3235d4b0da129d58e102fd39dc35374f450ebe1a35b11865ee563b25f55364ee" exitCode=0 Nov 
24 21:44:41 crc kubenswrapper[4915]: I1124 21:44:41.292124 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-cxqp9" event={"ID":"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6","Type":"ContainerDied","Data":"3235d4b0da129d58e102fd39dc35374f450ebe1a35b11865ee563b25f55364ee"} Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.549825 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.550141 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.744864 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.803634 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-config-data\") pod \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.803803 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmcxx\" (UniqueName: \"kubernetes.io/projected/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-kube-api-access-rmcxx\") pod \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.803941 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-combined-ca-bundle\") pod \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.804012 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-scripts\") pod \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\" (UID: \"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6\") " Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.810337 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-kube-api-access-rmcxx" (OuterVolumeSpecName: "kube-api-access-rmcxx") pod "6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6" (UID: "6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6"). InnerVolumeSpecName "kube-api-access-rmcxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.810822 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-scripts" (OuterVolumeSpecName: "scripts") pod "6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6" (UID: "6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.839260 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-config-data" (OuterVolumeSpecName: "config-data") pod "6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6" (UID: "6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.867026 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6" (UID: "6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.905977 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.906055 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.906069 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:42 crc kubenswrapper[4915]: I1124 21:44:42.906081 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmcxx\" (UniqueName: \"kubernetes.io/projected/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6-kube-api-access-rmcxx\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:43 crc kubenswrapper[4915]: I1124 21:44:43.324677 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-cxqp9" event={"ID":"6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6","Type":"ContainerDied","Data":"0b1284a95537413d4706b26032331aad9aaa3e7d6c1a9a66ff5c278e661cb6da"} Nov 24 21:44:43 crc kubenswrapper[4915]: I1124 21:44:43.324722 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b1284a95537413d4706b26032331aad9aaa3e7d6c1a9a66ff5c278e661cb6da" Nov 24 21:44:43 crc kubenswrapper[4915]: I1124 21:44:43.324827 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-cxqp9" Nov 24 21:44:43 crc kubenswrapper[4915]: I1124 21:44:43.562995 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fd9c0d54-711e-4173-bc12-063f838129e4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.244:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:44:43 crc kubenswrapper[4915]: I1124 21:44:43.563018 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fd9c0d54-711e-4173-bc12-063f838129e4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.244:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:44:43 crc kubenswrapper[4915]: I1124 21:44:43.667862 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 21:44:45 crc kubenswrapper[4915]: I1124 21:44:45.597399 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:44:45 crc kubenswrapper[4915]: I1124 21:44:45.597690 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.565119 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 24 21:44:46 crc kubenswrapper[4915]: E1124 21:44:46.566054 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6" containerName="aodh-db-sync" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.566074 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6" containerName="aodh-db-sync" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.576054 4915 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6" containerName="aodh-db-sync" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.594288 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.594420 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.599752 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.600019 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.600171 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-497jp" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.682987 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b8dd7ab-1f63-4543-87cc-583dae222748" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.245:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.683108 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9b8dd7ab-1f63-4543-87cc-583dae222748" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.245:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.699732 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-config-data\") pod \"aodh-0\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " pod="openstack/aodh-0" Nov 24 21:44:46 crc kubenswrapper[4915]: 
I1124 21:44:46.701433 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " pod="openstack/aodh-0" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.701653 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st989\" (UniqueName: \"kubernetes.io/projected/f9c9b404-e3bf-41de-a3fe-f79c98113692-kube-api-access-st989\") pod \"aodh-0\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " pod="openstack/aodh-0" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.701728 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-scripts\") pod \"aodh-0\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " pod="openstack/aodh-0" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.803542 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-scripts\") pod \"aodh-0\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " pod="openstack/aodh-0" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.803615 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-config-data\") pod \"aodh-0\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " pod="openstack/aodh-0" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.803764 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-combined-ca-bundle\") pod \"aodh-0\" (UID: 
\"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " pod="openstack/aodh-0" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.803992 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st989\" (UniqueName: \"kubernetes.io/projected/f9c9b404-e3bf-41de-a3fe-f79c98113692-kube-api-access-st989\") pod \"aodh-0\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " pod="openstack/aodh-0" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.814173 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-config-data\") pod \"aodh-0\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " pod="openstack/aodh-0" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.820847 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-scripts\") pod \"aodh-0\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " pod="openstack/aodh-0" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.821248 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " pod="openstack/aodh-0" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.842264 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st989\" (UniqueName: \"kubernetes.io/projected/f9c9b404-e3bf-41de-a3fe-f79c98113692-kube-api-access-st989\") pod \"aodh-0\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " pod="openstack/aodh-0" Nov 24 21:44:46 crc kubenswrapper[4915]: I1124 21:44:46.922560 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 21:44:47 crc kubenswrapper[4915]: I1124 21:44:47.511674 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 21:44:48 crc kubenswrapper[4915]: I1124 21:44:48.421478 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f9c9b404-e3bf-41de-a3fe-f79c98113692","Type":"ContainerStarted","Data":"42b90e436064dd8168cddb993e9dbdef022a94757a87879e5aa20d49ca476381"} Nov 24 21:44:48 crc kubenswrapper[4915]: I1124 21:44:48.421800 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f9c9b404-e3bf-41de-a3fe-f79c98113692","Type":"ContainerStarted","Data":"d29658a14e70e1e1984c6af05ef2500063156005cb1fb87c3d3f159ce56e4d06"} Nov 24 21:44:48 crc kubenswrapper[4915]: I1124 21:44:48.667550 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 21:44:48 crc kubenswrapper[4915]: I1124 21:44:48.715410 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 21:44:49 crc kubenswrapper[4915]: I1124 21:44:49.472166 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 21:44:49 crc kubenswrapper[4915]: I1124 21:44:49.492570 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 24 21:44:49 crc kubenswrapper[4915]: I1124 21:44:49.640023 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:49 crc kubenswrapper[4915]: I1124 21:44:49.640470 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="ceilometer-central-agent" containerID="cri-o://7d334b2e176b14611bb63d228a184c051d6ff8d39b2a646ab95dce4c91eaded3" gracePeriod=30 Nov 24 21:44:49 crc kubenswrapper[4915]: I1124 21:44:49.640921 
4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="proxy-httpd" containerID="cri-o://8fb222fbac2d203d15ec3549fc8309d9b833504ee6b0080402eecc860aec4373" gracePeriod=30 Nov 24 21:44:49 crc kubenswrapper[4915]: I1124 21:44:49.640975 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="sg-core" containerID="cri-o://019c777f7456362868035db56dc52b710b0b59619f1f69e8bd2254d90e518d0f" gracePeriod=30 Nov 24 21:44:49 crc kubenswrapper[4915]: I1124 21:44:49.641007 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="ceilometer-notification-agent" containerID="cri-o://78dd86358ee6c0cc1ed1da71f0880114bd2f1e8812ed147732e76fece1820578" gracePeriod=30 Nov 24 21:44:49 crc kubenswrapper[4915]: I1124 21:44:49.650586 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 21:44:50 crc kubenswrapper[4915]: I1124 21:44:50.451335 4915 generic.go:334] "Generic (PLEG): container finished" podID="e821fdc7-f166-4347-8087-341868680ed0" containerID="8fb222fbac2d203d15ec3549fc8309d9b833504ee6b0080402eecc860aec4373" exitCode=0 Nov 24 21:44:50 crc kubenswrapper[4915]: I1124 21:44:50.451650 4915 generic.go:334] "Generic (PLEG): container finished" podID="e821fdc7-f166-4347-8087-341868680ed0" containerID="019c777f7456362868035db56dc52b710b0b59619f1f69e8bd2254d90e518d0f" exitCode=2 Nov 24 21:44:50 crc kubenswrapper[4915]: I1124 21:44:50.451663 4915 generic.go:334] "Generic (PLEG): container finished" podID="e821fdc7-f166-4347-8087-341868680ed0" containerID="7d334b2e176b14611bb63d228a184c051d6ff8d39b2a646ab95dce4c91eaded3" exitCode=0 Nov 24 21:44:50 crc kubenswrapper[4915]: I1124 21:44:50.451417 4915 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e821fdc7-f166-4347-8087-341868680ed0","Type":"ContainerDied","Data":"8fb222fbac2d203d15ec3549fc8309d9b833504ee6b0080402eecc860aec4373"} Nov 24 21:44:50 crc kubenswrapper[4915]: I1124 21:44:50.451790 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e821fdc7-f166-4347-8087-341868680ed0","Type":"ContainerDied","Data":"019c777f7456362868035db56dc52b710b0b59619f1f69e8bd2254d90e518d0f"} Nov 24 21:44:50 crc kubenswrapper[4915]: I1124 21:44:50.451812 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e821fdc7-f166-4347-8087-341868680ed0","Type":"ContainerDied","Data":"7d334b2e176b14611bb63d228a184c051d6ff8d39b2a646ab95dce4c91eaded3"} Nov 24 21:44:51 crc kubenswrapper[4915]: I1124 21:44:51.466394 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f9c9b404-e3bf-41de-a3fe-f79c98113692","Type":"ContainerStarted","Data":"600cf224101aadbc6c4db75669402f568f1967c32b52a621986c15aa44f5a48c"} Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.486841 4915 generic.go:334] "Generic (PLEG): container finished" podID="7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1" containerID="6b0aa3428e2ffd403420cd969f1ed3877c982cf1e85c0b3af3385eb466455007" exitCode=137 Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.487217 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1","Type":"ContainerDied","Data":"6b0aa3428e2ffd403420cd969f1ed3877c982cf1e85c0b3af3385eb466455007"} Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.487241 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1","Type":"ContainerDied","Data":"1f9dbe87485f9f9418413450bd7a0250d4859b7c54b79d581eef245639d2cde3"} Nov 24 21:44:52 crc 
kubenswrapper[4915]: I1124 21:44:52.487251 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f9dbe87485f9f9418413450bd7a0250d4859b7c54b79d581eef245639d2cde3" Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.492259 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f9c9b404-e3bf-41de-a3fe-f79c98113692","Type":"ContainerStarted","Data":"f67de18064128474d5b61da458198f58eeca5393b6dc5db0fb23457ad1a70f4f"} Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.499355 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.555699 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.557725 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.562699 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.679153 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-config-data\") pod \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\" (UID: \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\") " Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.679291 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkfrh\" (UniqueName: \"kubernetes.io/projected/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-kube-api-access-wkfrh\") pod \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\" (UID: \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\") " Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.679332 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-combined-ca-bundle\") pod \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\" (UID: \"7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1\") " Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.685947 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-kube-api-access-wkfrh" (OuterVolumeSpecName: "kube-api-access-wkfrh") pod "7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1" (UID: "7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1"). InnerVolumeSpecName "kube-api-access-wkfrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.712768 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-config-data" (OuterVolumeSpecName: "config-data") pod "7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1" (UID: "7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.721897 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1" (UID: "7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.782735 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.782795 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkfrh\" (UniqueName: \"kubernetes.io/projected/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-kube-api-access-wkfrh\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:52 crc kubenswrapper[4915]: I1124 21:44:52.782805 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.515132 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.584979 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.606665 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.612300 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.635744 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:44:53 crc kubenswrapper[4915]: E1124 21:44:53.636453 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.636478 4915 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.636808 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1" containerName="nova-cell1-novncproxy-novncproxy" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.637832 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.645414 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.645650 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.645890 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.652142 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.818885 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b977b2-72cc-4b96-9a12-85155332319b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.819473 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b977b2-72cc-4b96-9a12-85155332319b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.819558 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b977b2-72cc-4b96-9a12-85155332319b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.819669 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbwdv\" (UniqueName: \"kubernetes.io/projected/b2b977b2-72cc-4b96-9a12-85155332319b-kube-api-access-gbwdv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.819855 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b977b2-72cc-4b96-9a12-85155332319b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.922067 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b977b2-72cc-4b96-9a12-85155332319b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.922566 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b977b2-72cc-4b96-9a12-85155332319b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.922888 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b977b2-72cc-4b96-9a12-85155332319b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.923017 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b977b2-72cc-4b96-9a12-85155332319b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.923184 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbwdv\" (UniqueName: \"kubernetes.io/projected/b2b977b2-72cc-4b96-9a12-85155332319b-kube-api-access-gbwdv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.934248 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2b977b2-72cc-4b96-9a12-85155332319b-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.935089 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b977b2-72cc-4b96-9a12-85155332319b-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" 
Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.938310 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2b977b2-72cc-4b96-9a12-85155332319b-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.939941 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2b977b2-72cc-4b96-9a12-85155332319b-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:53 crc kubenswrapper[4915]: I1124 21:44:53.969494 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbwdv\" (UniqueName: \"kubernetes.io/projected/b2b977b2-72cc-4b96-9a12-85155332319b-kube-api-access-gbwdv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b2b977b2-72cc-4b96-9a12-85155332319b\") " pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:54 crc kubenswrapper[4915]: I1124 21:44:54.010084 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:54 crc kubenswrapper[4915]: I1124 21:44:54.328072 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:44:54 crc kubenswrapper[4915]: I1124 21:44:54.328418 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:44:54 crc kubenswrapper[4915]: I1124 21:44:54.444050 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1" path="/var/lib/kubelet/pods/7b8c9069-7ecf-4a7b-9758-f91edbf0fdd1/volumes" Nov 24 21:44:54 crc kubenswrapper[4915]: W1124 21:44:54.581415 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2b977b2_72cc_4b96_9a12_85155332319b.slice/crio-6604125757e8f9488412d3504ee36933719adb8a362bc68213e53c775a10bd04 WatchSource:0}: Error finding container 6604125757e8f9488412d3504ee36933719adb8a362bc68213e53c775a10bd04: Status 404 returned error can't find the container with id 6604125757e8f9488412d3504ee36933719adb8a362bc68213e53c775a10bd04 Nov 24 21:44:54 crc kubenswrapper[4915]: I1124 21:44:54.589384 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.438852 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.541649 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b2b977b2-72cc-4b96-9a12-85155332319b","Type":"ContainerStarted","Data":"6604125757e8f9488412d3504ee36933719adb8a362bc68213e53c775a10bd04"} Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.546345 4915 generic.go:334] "Generic (PLEG): container finished" podID="e821fdc7-f166-4347-8087-341868680ed0" containerID="78dd86358ee6c0cc1ed1da71f0880114bd2f1e8812ed147732e76fece1820578" exitCode=0 Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.546409 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e821fdc7-f166-4347-8087-341868680ed0","Type":"ContainerDied","Data":"78dd86358ee6c0cc1ed1da71f0880114bd2f1e8812ed147732e76fece1820578"} Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.546461 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e821fdc7-f166-4347-8087-341868680ed0","Type":"ContainerDied","Data":"cdcfbf740bc09231d322579dde74446ff45bbd7c6f9084d8085911877f7ffcdd"} Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.546462 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.546476 4915 scope.go:117] "RemoveContainer" containerID="8fb222fbac2d203d15ec3549fc8309d9b833504ee6b0080402eecc860aec4373" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.569308 4915 scope.go:117] "RemoveContainer" containerID="019c777f7456362868035db56dc52b710b0b59619f1f69e8bd2254d90e518d0f" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.570912 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-config-data\") pod \"e821fdc7-f166-4347-8087-341868680ed0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.570996 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e821fdc7-f166-4347-8087-341868680ed0-run-httpd\") pod \"e821fdc7-f166-4347-8087-341868680ed0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.571209 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-scripts\") pod \"e821fdc7-f166-4347-8087-341868680ed0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.571241 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7vmh\" (UniqueName: \"kubernetes.io/projected/e821fdc7-f166-4347-8087-341868680ed0-kube-api-access-s7vmh\") pod \"e821fdc7-f166-4347-8087-341868680ed0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.571334 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-combined-ca-bundle\") pod \"e821fdc7-f166-4347-8087-341868680ed0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.571403 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-sg-core-conf-yaml\") pod \"e821fdc7-f166-4347-8087-341868680ed0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.571404 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e821fdc7-f166-4347-8087-341868680ed0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e821fdc7-f166-4347-8087-341868680ed0" (UID: "e821fdc7-f166-4347-8087-341868680ed0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.571498 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e821fdc7-f166-4347-8087-341868680ed0-log-httpd\") pod \"e821fdc7-f166-4347-8087-341868680ed0\" (UID: \"e821fdc7-f166-4347-8087-341868680ed0\") " Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.572388 4915 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e821fdc7-f166-4347-8087-341868680ed0-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.573135 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e821fdc7-f166-4347-8087-341868680ed0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e821fdc7-f166-4347-8087-341868680ed0" (UID: "e821fdc7-f166-4347-8087-341868680ed0"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.585980 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e821fdc7-f166-4347-8087-341868680ed0-kube-api-access-s7vmh" (OuterVolumeSpecName: "kube-api-access-s7vmh") pod "e821fdc7-f166-4347-8087-341868680ed0" (UID: "e821fdc7-f166-4347-8087-341868680ed0"). InnerVolumeSpecName "kube-api-access-s7vmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.590730 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-scripts" (OuterVolumeSpecName: "scripts") pod "e821fdc7-f166-4347-8087-341868680ed0" (UID: "e821fdc7-f166-4347-8087-341868680ed0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.607303 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.615019 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.615345 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.625345 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.649317 4915 scope.go:117] "RemoveContainer" containerID="78dd86358ee6c0cc1ed1da71f0880114bd2f1e8812ed147732e76fece1820578" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.674938 4915 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e821fdc7-f166-4347-8087-341868680ed0-log-httpd\") on node 
\"crc\" DevicePath \"\"" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.674978 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.674994 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7vmh\" (UniqueName: \"kubernetes.io/projected/e821fdc7-f166-4347-8087-341868680ed0-kube-api-access-s7vmh\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.701968 4915 scope.go:117] "RemoveContainer" containerID="7d334b2e176b14611bb63d228a184c051d6ff8d39b2a646ab95dce4c91eaded3" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.742422 4915 scope.go:117] "RemoveContainer" containerID="8fb222fbac2d203d15ec3549fc8309d9b833504ee6b0080402eecc860aec4373" Nov 24 21:44:55 crc kubenswrapper[4915]: E1124 21:44:55.743379 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fb222fbac2d203d15ec3549fc8309d9b833504ee6b0080402eecc860aec4373\": container with ID starting with 8fb222fbac2d203d15ec3549fc8309d9b833504ee6b0080402eecc860aec4373 not found: ID does not exist" containerID="8fb222fbac2d203d15ec3549fc8309d9b833504ee6b0080402eecc860aec4373" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.743418 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fb222fbac2d203d15ec3549fc8309d9b833504ee6b0080402eecc860aec4373"} err="failed to get container status \"8fb222fbac2d203d15ec3549fc8309d9b833504ee6b0080402eecc860aec4373\": rpc error: code = NotFound desc = could not find container \"8fb222fbac2d203d15ec3549fc8309d9b833504ee6b0080402eecc860aec4373\": container with ID starting with 8fb222fbac2d203d15ec3549fc8309d9b833504ee6b0080402eecc860aec4373 not found: ID does not exist" Nov 24 21:44:55 crc 
kubenswrapper[4915]: I1124 21:44:55.743446 4915 scope.go:117] "RemoveContainer" containerID="019c777f7456362868035db56dc52b710b0b59619f1f69e8bd2254d90e518d0f" Nov 24 21:44:55 crc kubenswrapper[4915]: E1124 21:44:55.743820 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"019c777f7456362868035db56dc52b710b0b59619f1f69e8bd2254d90e518d0f\": container with ID starting with 019c777f7456362868035db56dc52b710b0b59619f1f69e8bd2254d90e518d0f not found: ID does not exist" containerID="019c777f7456362868035db56dc52b710b0b59619f1f69e8bd2254d90e518d0f" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.743841 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"019c777f7456362868035db56dc52b710b0b59619f1f69e8bd2254d90e518d0f"} err="failed to get container status \"019c777f7456362868035db56dc52b710b0b59619f1f69e8bd2254d90e518d0f\": rpc error: code = NotFound desc = could not find container \"019c777f7456362868035db56dc52b710b0b59619f1f69e8bd2254d90e518d0f\": container with ID starting with 019c777f7456362868035db56dc52b710b0b59619f1f69e8bd2254d90e518d0f not found: ID does not exist" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.743857 4915 scope.go:117] "RemoveContainer" containerID="78dd86358ee6c0cc1ed1da71f0880114bd2f1e8812ed147732e76fece1820578" Nov 24 21:44:55 crc kubenswrapper[4915]: E1124 21:44:55.744072 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78dd86358ee6c0cc1ed1da71f0880114bd2f1e8812ed147732e76fece1820578\": container with ID starting with 78dd86358ee6c0cc1ed1da71f0880114bd2f1e8812ed147732e76fece1820578 not found: ID does not exist" containerID="78dd86358ee6c0cc1ed1da71f0880114bd2f1e8812ed147732e76fece1820578" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.744099 4915 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"78dd86358ee6c0cc1ed1da71f0880114bd2f1e8812ed147732e76fece1820578"} err="failed to get container status \"78dd86358ee6c0cc1ed1da71f0880114bd2f1e8812ed147732e76fece1820578\": rpc error: code = NotFound desc = could not find container \"78dd86358ee6c0cc1ed1da71f0880114bd2f1e8812ed147732e76fece1820578\": container with ID starting with 78dd86358ee6c0cc1ed1da71f0880114bd2f1e8812ed147732e76fece1820578 not found: ID does not exist" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.744116 4915 scope.go:117] "RemoveContainer" containerID="7d334b2e176b14611bb63d228a184c051d6ff8d39b2a646ab95dce4c91eaded3" Nov 24 21:44:55 crc kubenswrapper[4915]: E1124 21:44:55.744317 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d334b2e176b14611bb63d228a184c051d6ff8d39b2a646ab95dce4c91eaded3\": container with ID starting with 7d334b2e176b14611bb63d228a184c051d6ff8d39b2a646ab95dce4c91eaded3 not found: ID does not exist" containerID="7d334b2e176b14611bb63d228a184c051d6ff8d39b2a646ab95dce4c91eaded3" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.744343 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d334b2e176b14611bb63d228a184c051d6ff8d39b2a646ab95dce4c91eaded3"} err="failed to get container status \"7d334b2e176b14611bb63d228a184c051d6ff8d39b2a646ab95dce4c91eaded3\": rpc error: code = NotFound desc = could not find container \"7d334b2e176b14611bb63d228a184c051d6ff8d39b2a646ab95dce4c91eaded3\": container with ID starting with 7d334b2e176b14611bb63d228a184c051d6ff8d39b2a646ab95dce4c91eaded3 not found: ID does not exist" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.757884 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod 
"e821fdc7-f166-4347-8087-341868680ed0" (UID: "e821fdc7-f166-4347-8087-341868680ed0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.777423 4915 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.839477 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e821fdc7-f166-4347-8087-341868680ed0" (UID: "e821fdc7-f166-4347-8087-341868680ed0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.869056 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-config-data" (OuterVolumeSpecName: "config-data") pod "e821fdc7-f166-4347-8087-341868680ed0" (UID: "e821fdc7-f166-4347-8087-341868680ed0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.879210 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:55 crc kubenswrapper[4915]: I1124 21:44:55.879237 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e821fdc7-f166-4347-8087-341868680ed0-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.181413 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.196626 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.207330 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:56 crc kubenswrapper[4915]: E1124 21:44:56.207885 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="ceilometer-notification-agent" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.207905 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="ceilometer-notification-agent" Nov 24 21:44:56 crc kubenswrapper[4915]: E1124 21:44:56.207917 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="ceilometer-central-agent" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.207923 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="ceilometer-central-agent" Nov 24 21:44:56 crc kubenswrapper[4915]: E1124 21:44:56.207961 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="sg-core" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.207968 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="sg-core" Nov 24 21:44:56 crc kubenswrapper[4915]: E1124 21:44:56.207993 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="proxy-httpd" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.208000 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="proxy-httpd" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.208205 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="proxy-httpd" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.208225 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="ceilometer-central-agent" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.208241 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="ceilometer-notification-agent" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.208265 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e821fdc7-f166-4347-8087-341868680ed0" containerName="sg-core" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.210301 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.212600 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.212815 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.219601 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.287601 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.287681 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-scripts\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.287920 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-run-httpd\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.288076 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-log-httpd\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " 
pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.288234 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.288538 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-config-data\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.288662 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79zl7\" (UniqueName: \"kubernetes.io/projected/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-kube-api-access-79zl7\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.390864 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79zl7\" (UniqueName: \"kubernetes.io/projected/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-kube-api-access-79zl7\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.391290 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.391360 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-scripts\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.391544 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-run-httpd\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.391877 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-run-httpd\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.392100 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-log-httpd\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.392181 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.392357 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-config-data\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 
21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.392371 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-log-httpd\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.459588 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e821fdc7-f166-4347-8087-341868680ed0" path="/var/lib/kubelet/pods/e821fdc7-f166-4347-8087-341868680ed0/volumes" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.550566 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.552118 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.552925 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-config-data\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.553467 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79zl7\" (UniqueName: \"kubernetes.io/projected/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-kube-api-access-79zl7\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 
21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.555387 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-scripts\") pod \"ceilometer-0\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.632615 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f9c9b404-e3bf-41de-a3fe-f79c98113692","Type":"ContainerStarted","Data":"4cc36742319d5eb23e607f2e6c7adfd85aa5f6db934c9620dfb4eeba8e8343e2"} Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.632903 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-api" containerID="cri-o://42b90e436064dd8168cddb993e9dbdef022a94757a87879e5aa20d49ca476381" gracePeriod=30 Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.633754 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-listener" containerID="cri-o://4cc36742319d5eb23e607f2e6c7adfd85aa5f6db934c9620dfb4eeba8e8343e2" gracePeriod=30 Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.633868 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-notifier" containerID="cri-o://f67de18064128474d5b61da458198f58eeca5393b6dc5db0fb23457ad1a70f4f" gracePeriod=30 Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.633939 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-evaluator" containerID="cri-o://600cf224101aadbc6c4db75669402f568f1967c32b52a621986c15aa44f5a48c" gracePeriod=30 Nov 24 21:44:56 crc 
kubenswrapper[4915]: I1124 21:44:56.750011 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.04229307 podStartE2EDuration="10.749991813s" podCreationTimestamp="2025-11-24 21:44:46 +0000 UTC" firstStartedPulling="2025-11-24 21:44:47.519974581 +0000 UTC m=+1505.836226754" lastFinishedPulling="2025-11-24 21:44:55.227673294 +0000 UTC m=+1513.543925497" observedRunningTime="2025-11-24 21:44:56.708556697 +0000 UTC m=+1515.024808880" watchObservedRunningTime="2025-11-24 21:44:56.749991813 +0000 UTC m=+1515.066243986" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.752271 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b2b977b2-72cc-4b96-9a12-85155332319b","Type":"ContainerStarted","Data":"8786970f03de0c512db48108e410de762f560d9aca8b5ba0cda1c49990dff99c"} Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.754517 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.763265 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.831030 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:44:56 crc kubenswrapper[4915]: I1124 21:44:56.833408 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.833389568 podStartE2EDuration="3.833389568s" podCreationTimestamp="2025-11-24 21:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:44:56.777929015 +0000 UTC m=+1515.094181208" watchObservedRunningTime="2025-11-24 21:44:56.833389568 +0000 UTC m=+1515.149641761" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.085414 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-jnbbb"] Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.088244 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.119142 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-jnbbb"] Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.188405 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-config\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.188464 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.188504 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.188568 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.188833 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.188943 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqkjc\" (UniqueName: \"kubernetes.io/projected/3d5f3f02-da73-476f-944d-3149838eb7e6-kube-api-access-pqkjc\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.293087 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.293168 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqkjc\" (UniqueName: \"kubernetes.io/projected/3d5f3f02-da73-476f-944d-3149838eb7e6-kube-api-access-pqkjc\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.293239 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-config\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.293261 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.293282 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.293315 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.294904 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.295101 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.296680 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-config\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.296922 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.300158 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.316727 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqkjc\" (UniqueName: \"kubernetes.io/projected/3d5f3f02-da73-476f-944d-3149838eb7e6-kube-api-access-pqkjc\") pod 
\"dnsmasq-dns-f84f9ccf-jnbbb\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.426682 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.703134 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.786765 4915 generic.go:334] "Generic (PLEG): container finished" podID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerID="42b90e436064dd8168cddb993e9dbdef022a94757a87879e5aa20d49ca476381" exitCode=0 Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.786843 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f9c9b404-e3bf-41de-a3fe-f79c98113692","Type":"ContainerDied","Data":"42b90e436064dd8168cddb993e9dbdef022a94757a87879e5aa20d49ca476381"} Nov 24 21:44:57 crc kubenswrapper[4915]: I1124 21:44:57.789001 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420410a4-f0bc-4c3f-a9b5-90c05ce6de57","Type":"ContainerStarted","Data":"be3273c7128671a871b3089243cf9805f404d55d4dbfb85bad640bf8a9e762a7"} Nov 24 21:44:58 crc kubenswrapper[4915]: W1124 21:44:58.358433 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d5f3f02_da73_476f_944d_3149838eb7e6.slice/crio-a094200b26899fc2565c935de8a06a28b907ee7aa77292433a04da16881f9361 WatchSource:0}: Error finding container a094200b26899fc2565c935de8a06a28b907ee7aa77292433a04da16881f9361: Status 404 returned error can't find the container with id a094200b26899fc2565c935de8a06a28b907ee7aa77292433a04da16881f9361 Nov 24 21:44:58 crc kubenswrapper[4915]: I1124 21:44:58.361731 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-f84f9ccf-jnbbb"] Nov 24 21:44:58 crc kubenswrapper[4915]: I1124 21:44:58.804183 4915 generic.go:334] "Generic (PLEG): container finished" podID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerID="600cf224101aadbc6c4db75669402f568f1967c32b52a621986c15aa44f5a48c" exitCode=0 Nov 24 21:44:58 crc kubenswrapper[4915]: I1124 21:44:58.804600 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f9c9b404-e3bf-41de-a3fe-f79c98113692","Type":"ContainerDied","Data":"600cf224101aadbc6c4db75669402f568f1967c32b52a621986c15aa44f5a48c"} Nov 24 21:44:58 crc kubenswrapper[4915]: I1124 21:44:58.806401 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420410a4-f0bc-4c3f-a9b5-90c05ce6de57","Type":"ContainerStarted","Data":"b8d05d985c03924be76a33666745e38b628061ff079931cd4f8e435a3ae002c0"} Nov 24 21:44:58 crc kubenswrapper[4915]: I1124 21:44:58.809680 4915 generic.go:334] "Generic (PLEG): container finished" podID="3d5f3f02-da73-476f-944d-3149838eb7e6" containerID="0c5f8996c93e5cb8bd20845d4a98ef7a1a1d016a818e5d666a05dafc1fbc447b" exitCode=0 Nov 24 21:44:58 crc kubenswrapper[4915]: I1124 21:44:58.811503 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" event={"ID":"3d5f3f02-da73-476f-944d-3149838eb7e6","Type":"ContainerDied","Data":"0c5f8996c93e5cb8bd20845d4a98ef7a1a1d016a818e5d666a05dafc1fbc447b"} Nov 24 21:44:58 crc kubenswrapper[4915]: I1124 21:44:58.811533 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" event={"ID":"3d5f3f02-da73-476f-944d-3149838eb7e6","Type":"ContainerStarted","Data":"a094200b26899fc2565c935de8a06a28b907ee7aa77292433a04da16881f9361"} Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.012120 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 
21:44:59.245872 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.246129 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="f841b28e-565b-41e8-9288-582e862cdceb" containerName="kube-state-metrics" containerID="cri-o://2cd5bdd8a6c0ab6a2ca80a5d03dd2fb0fe37c72fa61140b570070d9ff5f3eb08" gracePeriod=30 Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.309519 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.310049 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="43a390f0-1b26-42d4-bee7-df0e425cc7bf" containerName="mysqld-exporter" containerID="cri-o://7a8c5db09fd8c6c74407980359318c5a0bf7c0fbc2e8ef895ee300dcbb2d6260" gracePeriod=30 Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.856751 4915 generic.go:334] "Generic (PLEG): container finished" podID="43a390f0-1b26-42d4-bee7-df0e425cc7bf" containerID="7a8c5db09fd8c6c74407980359318c5a0bf7c0fbc2e8ef895ee300dcbb2d6260" exitCode=2 Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.857123 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"43a390f0-1b26-42d4-bee7-df0e425cc7bf","Type":"ContainerDied","Data":"7a8c5db09fd8c6c74407980359318c5a0bf7c0fbc2e8ef895ee300dcbb2d6260"} Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.863475 4915 generic.go:334] "Generic (PLEG): container finished" podID="f841b28e-565b-41e8-9288-582e862cdceb" containerID="2cd5bdd8a6c0ab6a2ca80a5d03dd2fb0fe37c72fa61140b570070d9ff5f3eb08" exitCode=2 Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.863519 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"f841b28e-565b-41e8-9288-582e862cdceb","Type":"ContainerDied","Data":"2cd5bdd8a6c0ab6a2ca80a5d03dd2fb0fe37c72fa61140b570070d9ff5f3eb08"} Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.885884 4915 generic.go:334] "Generic (PLEG): container finished" podID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerID="f67de18064128474d5b61da458198f58eeca5393b6dc5db0fb23457ad1a70f4f" exitCode=0 Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.885947 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f9c9b404-e3bf-41de-a3fe-f79c98113692","Type":"ContainerDied","Data":"f67de18064128474d5b61da458198f58eeca5393b6dc5db0fb23457ad1a70f4f"} Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.887887 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420410a4-f0bc-4c3f-a9b5-90c05ce6de57","Type":"ContainerStarted","Data":"2c3c0ffe43c82efed1fed6746480af89199ffe36901542566d4424989d05b51d"} Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.891682 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" event={"ID":"3d5f3f02-da73-476f-944d-3149838eb7e6","Type":"ContainerStarted","Data":"68260b105130add9f5ea03994ff35072ea9a4287b7e2d5cf285e2f8e681f8e45"} Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.893100 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.939869 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.940155 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9b8dd7ab-1f63-4543-87cc-583dae222748" containerName="nova-api-log" containerID="cri-o://38c5fbf95086d9b42e3298a1a68e186fb7322fbbebf222d744d7e947e609c504" gracePeriod=30 Nov 24 21:44:59 crc 
kubenswrapper[4915]: I1124 21:44:59.940826 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9b8dd7ab-1f63-4543-87cc-583dae222748" containerName="nova-api-api" containerID="cri-o://4c8a638356fdd403a5f2d009fa7513c36a06b3197ceb5d9e6cf9282e031d42c9" gracePeriod=30 Nov 24 21:44:59 crc kubenswrapper[4915]: I1124 21:44:59.961946 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" podStartSLOduration=2.9619227649999997 podStartE2EDuration="2.961922765s" podCreationTimestamp="2025-11-24 21:44:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:44:59.91527494 +0000 UTC m=+1518.231527113" watchObservedRunningTime="2025-11-24 21:44:59.961922765 +0000 UTC m=+1518.278174938" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.020448 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.134952 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.183806 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a390f0-1b26-42d4-bee7-df0e425cc7bf-config-data\") pod \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\" (UID: \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\") " Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.184199 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a390f0-1b26-42d4-bee7-df0e425cc7bf-combined-ca-bundle\") pod \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\" (UID: \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\") " Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.184476 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vd62\" (UniqueName: \"kubernetes.io/projected/43a390f0-1b26-42d4-bee7-df0e425cc7bf-kube-api-access-4vd62\") pod \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\" (UID: \"43a390f0-1b26-42d4-bee7-df0e425cc7bf\") " Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.212390 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a390f0-1b26-42d4-bee7-df0e425cc7bf-kube-api-access-4vd62" (OuterVolumeSpecName: "kube-api-access-4vd62") pod "43a390f0-1b26-42d4-bee7-df0e425cc7bf" (UID: "43a390f0-1b26-42d4-bee7-df0e425cc7bf"). InnerVolumeSpecName "kube-api-access-4vd62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.224923 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv"] Nov 24 21:45:00 crc kubenswrapper[4915]: E1124 21:45:00.225531 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f841b28e-565b-41e8-9288-582e862cdceb" containerName="kube-state-metrics" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.225551 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f841b28e-565b-41e8-9288-582e862cdceb" containerName="kube-state-metrics" Nov 24 21:45:00 crc kubenswrapper[4915]: E1124 21:45:00.225586 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a390f0-1b26-42d4-bee7-df0e425cc7bf" containerName="mysqld-exporter" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.225596 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a390f0-1b26-42d4-bee7-df0e425cc7bf" containerName="mysqld-exporter" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.225936 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a390f0-1b26-42d4-bee7-df0e425cc7bf" containerName="mysqld-exporter" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.225960 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f841b28e-565b-41e8-9288-582e862cdceb" containerName="kube-state-metrics" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.226844 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.231376 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.232796 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv"] Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.235789 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.267018 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a390f0-1b26-42d4-bee7-df0e425cc7bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43a390f0-1b26-42d4-bee7-df0e425cc7bf" (UID: "43a390f0-1b26-42d4-bee7-df0e425cc7bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.303659 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a390f0-1b26-42d4-bee7-df0e425cc7bf-config-data" (OuterVolumeSpecName: "config-data") pod "43a390f0-1b26-42d4-bee7-df0e425cc7bf" (UID: "43a390f0-1b26-42d4-bee7-df0e425cc7bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.313834 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5j7q\" (UniqueName: \"kubernetes.io/projected/f841b28e-565b-41e8-9288-582e862cdceb-kube-api-access-f5j7q\") pod \"f841b28e-565b-41e8-9288-582e862cdceb\" (UID: \"f841b28e-565b-41e8-9288-582e862cdceb\") " Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.314743 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vd62\" (UniqueName: \"kubernetes.io/projected/43a390f0-1b26-42d4-bee7-df0e425cc7bf-kube-api-access-4vd62\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.314783 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43a390f0-1b26-42d4-bee7-df0e425cc7bf-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.314796 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43a390f0-1b26-42d4-bee7-df0e425cc7bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.337728 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f841b28e-565b-41e8-9288-582e862cdceb-kube-api-access-f5j7q" (OuterVolumeSpecName: "kube-api-access-f5j7q") pod "f841b28e-565b-41e8-9288-582e862cdceb" (UID: "f841b28e-565b-41e8-9288-582e862cdceb"). InnerVolumeSpecName "kube-api-access-f5j7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.430278 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpjh4\" (UniqueName: \"kubernetes.io/projected/c25290f4-4f85-47fd-af5d-f141d5bb80a1-kube-api-access-xpjh4\") pod \"collect-profiles-29400345-6jvqv\" (UID: \"c25290f4-4f85-47fd-af5d-f141d5bb80a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.430372 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c25290f4-4f85-47fd-af5d-f141d5bb80a1-config-volume\") pod \"collect-profiles-29400345-6jvqv\" (UID: \"c25290f4-4f85-47fd-af5d-f141d5bb80a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.430445 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c25290f4-4f85-47fd-af5d-f141d5bb80a1-secret-volume\") pod \"collect-profiles-29400345-6jvqv\" (UID: \"c25290f4-4f85-47fd-af5d-f141d5bb80a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.430502 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5j7q\" (UniqueName: \"kubernetes.io/projected/f841b28e-565b-41e8-9288-582e862cdceb-kube-api-access-f5j7q\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.532236 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c25290f4-4f85-47fd-af5d-f141d5bb80a1-secret-volume\") pod \"collect-profiles-29400345-6jvqv\" (UID: 
\"c25290f4-4f85-47fd-af5d-f141d5bb80a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.532389 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpjh4\" (UniqueName: \"kubernetes.io/projected/c25290f4-4f85-47fd-af5d-f141d5bb80a1-kube-api-access-xpjh4\") pod \"collect-profiles-29400345-6jvqv\" (UID: \"c25290f4-4f85-47fd-af5d-f141d5bb80a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.532508 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c25290f4-4f85-47fd-af5d-f141d5bb80a1-config-volume\") pod \"collect-profiles-29400345-6jvqv\" (UID: \"c25290f4-4f85-47fd-af5d-f141d5bb80a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.533641 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c25290f4-4f85-47fd-af5d-f141d5bb80a1-config-volume\") pod \"collect-profiles-29400345-6jvqv\" (UID: \"c25290f4-4f85-47fd-af5d-f141d5bb80a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.535737 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c25290f4-4f85-47fd-af5d-f141d5bb80a1-secret-volume\") pod \"collect-profiles-29400345-6jvqv\" (UID: \"c25290f4-4f85-47fd-af5d-f141d5bb80a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.548642 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpjh4\" (UniqueName: 
\"kubernetes.io/projected/c25290f4-4f85-47fd-af5d-f141d5bb80a1-kube-api-access-xpjh4\") pod \"collect-profiles-29400345-6jvqv\" (UID: \"c25290f4-4f85-47fd-af5d-f141d5bb80a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.578078 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.905718 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"43a390f0-1b26-42d4-bee7-df0e425cc7bf","Type":"ContainerDied","Data":"0095eb0da3aee9919f780e69c49a9f51dd592260dd05790d0feb345f5727db6c"} Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.906334 4915 scope.go:117] "RemoveContainer" containerID="7a8c5db09fd8c6c74407980359318c5a0bf7c0fbc2e8ef895ee300dcbb2d6260" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.905820 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.910669 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"f841b28e-565b-41e8-9288-582e862cdceb","Type":"ContainerDied","Data":"5ea929aca1b2f9a811a01fedcdf6b8c67c43e716c1e30dbb79a6af374ca4ab72"} Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.910707 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.915138 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420410a4-f0bc-4c3f-a9b5-90c05ce6de57","Type":"ContainerStarted","Data":"12623aaea0585834cf3d636af0a2517a7058863cf041570d5e01ecaaddc8ffe6"} Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.918213 4915 generic.go:334] "Generic (PLEG): container finished" podID="9b8dd7ab-1f63-4543-87cc-583dae222748" containerID="38c5fbf95086d9b42e3298a1a68e186fb7322fbbebf222d744d7e947e609c504" exitCode=143 Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.918756 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b8dd7ab-1f63-4543-87cc-583dae222748","Type":"ContainerDied","Data":"38c5fbf95086d9b42e3298a1a68e186fb7322fbbebf222d744d7e947e609c504"} Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.952178 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.962628 4915 scope.go:117] "RemoveContainer" containerID="2cd5bdd8a6c0ab6a2ca80a5d03dd2fb0fe37c72fa61140b570070d9ff5f3eb08" Nov 24 21:45:00 crc kubenswrapper[4915]: I1124 21:45:00.980265 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.009064 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.021335 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.023355 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.025623 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.025770 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.034966 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.049483 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.059186 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.060884 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.062485 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.063074 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.077953 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.146985 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m85p\" (UniqueName: \"kubernetes.io/projected/d3319417-d53f-48fa-bffa-fa20dfabccd4-kube-api-access-2m85p\") pod \"mysqld-exporter-0\" (UID: \"d3319417-d53f-48fa-bffa-fa20dfabccd4\") " pod="openstack/mysqld-exporter-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 
21:45:01.147101 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3319417-d53f-48fa-bffa-fa20dfabccd4-config-data\") pod \"mysqld-exporter-0\" (UID: \"d3319417-d53f-48fa-bffa-fa20dfabccd4\") " pod="openstack/mysqld-exporter-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.147170 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3319417-d53f-48fa-bffa-fa20dfabccd4-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"d3319417-d53f-48fa-bffa-fa20dfabccd4\") " pod="openstack/mysqld-exporter-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.147220 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3319417-d53f-48fa-bffa-fa20dfabccd4-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d3319417-d53f-48fa-bffa-fa20dfabccd4\") " pod="openstack/mysqld-exporter-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.158858 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv"] Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.250225 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m85p\" (UniqueName: \"kubernetes.io/projected/d3319417-d53f-48fa-bffa-fa20dfabccd4-kube-api-access-2m85p\") pod \"mysqld-exporter-0\" (UID: \"d3319417-d53f-48fa-bffa-fa20dfabccd4\") " pod="openstack/mysqld-exporter-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.250611 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/daa86f21-5aec-48b3-833d-8d2c99e96028-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"daa86f21-5aec-48b3-833d-8d2c99e96028\") " pod="openstack/kube-state-metrics-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.250680 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa86f21-5aec-48b3-833d-8d2c99e96028-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"daa86f21-5aec-48b3-833d-8d2c99e96028\") " pod="openstack/kube-state-metrics-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.250804 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5ztx\" (UniqueName: \"kubernetes.io/projected/daa86f21-5aec-48b3-833d-8d2c99e96028-kube-api-access-b5ztx\") pod \"kube-state-metrics-0\" (UID: \"daa86f21-5aec-48b3-833d-8d2c99e96028\") " pod="openstack/kube-state-metrics-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.250849 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3319417-d53f-48fa-bffa-fa20dfabccd4-config-data\") pod \"mysqld-exporter-0\" (UID: \"d3319417-d53f-48fa-bffa-fa20dfabccd4\") " pod="openstack/mysqld-exporter-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.251964 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3319417-d53f-48fa-bffa-fa20dfabccd4-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"d3319417-d53f-48fa-bffa-fa20dfabccd4\") " pod="openstack/mysqld-exporter-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.252029 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d3319417-d53f-48fa-bffa-fa20dfabccd4-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d3319417-d53f-48fa-bffa-fa20dfabccd4\") " pod="openstack/mysqld-exporter-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.252071 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/daa86f21-5aec-48b3-833d-8d2c99e96028-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"daa86f21-5aec-48b3-833d-8d2c99e96028\") " pod="openstack/kube-state-metrics-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.259633 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3319417-d53f-48fa-bffa-fa20dfabccd4-config-data\") pod \"mysqld-exporter-0\" (UID: \"d3319417-d53f-48fa-bffa-fa20dfabccd4\") " pod="openstack/mysqld-exporter-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.260254 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3319417-d53f-48fa-bffa-fa20dfabccd4-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"d3319417-d53f-48fa-bffa-fa20dfabccd4\") " pod="openstack/mysqld-exporter-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.260741 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3319417-d53f-48fa-bffa-fa20dfabccd4-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"d3319417-d53f-48fa-bffa-fa20dfabccd4\") " pod="openstack/mysqld-exporter-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.278476 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m85p\" (UniqueName: \"kubernetes.io/projected/d3319417-d53f-48fa-bffa-fa20dfabccd4-kube-api-access-2m85p\") pod \"mysqld-exporter-0\" (UID: 
\"d3319417-d53f-48fa-bffa-fa20dfabccd4\") " pod="openstack/mysqld-exporter-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.349285 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.354374 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/daa86f21-5aec-48b3-833d-8d2c99e96028-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"daa86f21-5aec-48b3-833d-8d2c99e96028\") " pod="openstack/kube-state-metrics-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.354435 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa86f21-5aec-48b3-833d-8d2c99e96028-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"daa86f21-5aec-48b3-833d-8d2c99e96028\") " pod="openstack/kube-state-metrics-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.354507 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5ztx\" (UniqueName: \"kubernetes.io/projected/daa86f21-5aec-48b3-833d-8d2c99e96028-kube-api-access-b5ztx\") pod \"kube-state-metrics-0\" (UID: \"daa86f21-5aec-48b3-833d-8d2c99e96028\") " pod="openstack/kube-state-metrics-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.354658 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/daa86f21-5aec-48b3-833d-8d2c99e96028-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"daa86f21-5aec-48b3-833d-8d2c99e96028\") " pod="openstack/kube-state-metrics-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.359981 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/daa86f21-5aec-48b3-833d-8d2c99e96028-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"daa86f21-5aec-48b3-833d-8d2c99e96028\") " pod="openstack/kube-state-metrics-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.364428 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daa86f21-5aec-48b3-833d-8d2c99e96028-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"daa86f21-5aec-48b3-833d-8d2c99e96028\") " pod="openstack/kube-state-metrics-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.372836 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5ztx\" (UniqueName: \"kubernetes.io/projected/daa86f21-5aec-48b3-833d-8d2c99e96028-kube-api-access-b5ztx\") pod \"kube-state-metrics-0\" (UID: \"daa86f21-5aec-48b3-833d-8d2c99e96028\") " pod="openstack/kube-state-metrics-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.377347 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/daa86f21-5aec-48b3-833d-8d2c99e96028-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"daa86f21-5aec-48b3-833d-8d2c99e96028\") " pod="openstack/kube-state-metrics-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.385198 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.934942 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420410a4-f0bc-4c3f-a9b5-90c05ce6de57","Type":"ContainerStarted","Data":"1c3f85f030574530b5c274ca7f6f63ab46f05581ab8ecf7c49a7576b90dfcf46"} Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.935298 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.939022 4915 generic.go:334] "Generic (PLEG): container finished" podID="c25290f4-4f85-47fd-af5d-f141d5bb80a1" containerID="4f7e26fb86d3d63488f9eed5b8c0cc93f186bd4474fca2541112fe8c5e504d03" exitCode=0 Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.939092 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" event={"ID":"c25290f4-4f85-47fd-af5d-f141d5bb80a1","Type":"ContainerDied","Data":"4f7e26fb86d3d63488f9eed5b8c0cc93f186bd4474fca2541112fe8c5e504d03"} Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.939120 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" event={"ID":"c25290f4-4f85-47fd-af5d-f141d5bb80a1","Type":"ContainerStarted","Data":"51baf6ea8745285cbcf70d5acdf675c581e0e62898abd961616f8badf5411537"} Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.940931 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Nov 24 21:45:01 crc kubenswrapper[4915]: I1124 21:45:01.968679 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.453495872 podStartE2EDuration="5.968636801s" podCreationTimestamp="2025-11-24 21:44:56 +0000 UTC" firstStartedPulling="2025-11-24 21:44:57.735939016 +0000 UTC m=+1516.052191179" 
lastFinishedPulling="2025-11-24 21:45:01.251079935 +0000 UTC m=+1519.567332108" observedRunningTime="2025-11-24 21:45:01.95859948 +0000 UTC m=+1520.274851663" watchObservedRunningTime="2025-11-24 21:45:01.968636801 +0000 UTC m=+1520.284888994" Nov 24 21:45:02 crc kubenswrapper[4915]: I1124 21:45:02.038456 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 24 21:45:02 crc kubenswrapper[4915]: I1124 21:45:02.513745 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43a390f0-1b26-42d4-bee7-df0e425cc7bf" path="/var/lib/kubelet/pods/43a390f0-1b26-42d4-bee7-df0e425cc7bf/volumes" Nov 24 21:45:02 crc kubenswrapper[4915]: I1124 21:45:02.518252 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f841b28e-565b-41e8-9288-582e862cdceb" path="/var/lib/kubelet/pods/f841b28e-565b-41e8-9288-582e862cdceb/volumes" Nov 24 21:45:02 crc kubenswrapper[4915]: I1124 21:45:02.552570 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:45:02 crc kubenswrapper[4915]: I1124 21:45:02.970200 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"daa86f21-5aec-48b3-833d-8d2c99e96028","Type":"ContainerStarted","Data":"9b422b02db51af232cc97c6bbd2a8e597658debacb94dad620e6e4f21bec02f8"} Nov 24 21:45:02 crc kubenswrapper[4915]: I1124 21:45:02.970572 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"daa86f21-5aec-48b3-833d-8d2c99e96028","Type":"ContainerStarted","Data":"8b24a67020856aa246fee915936ef59d3a45093cd3fd830e56d8098fadb475e2"} Nov 24 21:45:02 crc kubenswrapper[4915]: I1124 21:45:02.970637 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 24 21:45:02 crc kubenswrapper[4915]: I1124 21:45:02.974043 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" 
event={"ID":"d3319417-d53f-48fa-bffa-fa20dfabccd4","Type":"ContainerStarted","Data":"dd03903dc0eeea88f12c297b9c2d3d72222c8b0a0f2de9c6e2f50a5bcb95a5e7"} Nov 24 21:45:02 crc kubenswrapper[4915]: I1124 21:45:02.974152 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"d3319417-d53f-48fa-bffa-fa20dfabccd4","Type":"ContainerStarted","Data":"64c63bb9391e4eb36cc01657b944eb3fd98f73213752b89293b69207e9e70ac5"} Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.075348 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.725860854 podStartE2EDuration="3.075327186s" podCreationTimestamp="2025-11-24 21:45:00 +0000 UTC" firstStartedPulling="2025-11-24 21:45:02.035066489 +0000 UTC m=+1520.351318662" lastFinishedPulling="2025-11-24 21:45:02.384532821 +0000 UTC m=+1520.700784994" observedRunningTime="2025-11-24 21:45:02.996143493 +0000 UTC m=+1521.312395666" watchObservedRunningTime="2025-11-24 21:45:03.075327186 +0000 UTC m=+1521.391579359" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.205564 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.592287496 podStartE2EDuration="3.205533962s" podCreationTimestamp="2025-11-24 21:45:00 +0000 UTC" firstStartedPulling="2025-11-24 21:45:01.945397414 +0000 UTC m=+1520.261649587" lastFinishedPulling="2025-11-24 21:45:02.55864388 +0000 UTC m=+1520.874896053" observedRunningTime="2025-11-24 21:45:03.076666491 +0000 UTC m=+1521.392918664" watchObservedRunningTime="2025-11-24 21:45:03.205533962 +0000 UTC m=+1521.521786135" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.719040 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.825722 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c25290f4-4f85-47fd-af5d-f141d5bb80a1-secret-volume\") pod \"c25290f4-4f85-47fd-af5d-f141d5bb80a1\" (UID: \"c25290f4-4f85-47fd-af5d-f141d5bb80a1\") " Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.825807 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpjh4\" (UniqueName: \"kubernetes.io/projected/c25290f4-4f85-47fd-af5d-f141d5bb80a1-kube-api-access-xpjh4\") pod \"c25290f4-4f85-47fd-af5d-f141d5bb80a1\" (UID: \"c25290f4-4f85-47fd-af5d-f141d5bb80a1\") " Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.825863 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c25290f4-4f85-47fd-af5d-f141d5bb80a1-config-volume\") pod \"c25290f4-4f85-47fd-af5d-f141d5bb80a1\" (UID: \"c25290f4-4f85-47fd-af5d-f141d5bb80a1\") " Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.827201 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c25290f4-4f85-47fd-af5d-f141d5bb80a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "c25290f4-4f85-47fd-af5d-f141d5bb80a1" (UID: "c25290f4-4f85-47fd-af5d-f141d5bb80a1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.832165 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c25290f4-4f85-47fd-af5d-f141d5bb80a1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c25290f4-4f85-47fd-af5d-f141d5bb80a1" (UID: "c25290f4-4f85-47fd-af5d-f141d5bb80a1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.832419 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c25290f4-4f85-47fd-af5d-f141d5bb80a1-kube-api-access-xpjh4" (OuterVolumeSpecName: "kube-api-access-xpjh4") pod "c25290f4-4f85-47fd-af5d-f141d5bb80a1" (UID: "c25290f4-4f85-47fd-af5d-f141d5bb80a1"). InnerVolumeSpecName "kube-api-access-xpjh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.891356 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.927600 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8dd7ab-1f63-4543-87cc-583dae222748-config-data\") pod \"9b8dd7ab-1f63-4543-87cc-583dae222748\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.927689 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lwfc\" (UniqueName: \"kubernetes.io/projected/9b8dd7ab-1f63-4543-87cc-583dae222748-kube-api-access-6lwfc\") pod \"9b8dd7ab-1f63-4543-87cc-583dae222748\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.927964 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b8dd7ab-1f63-4543-87cc-583dae222748-logs\") pod \"9b8dd7ab-1f63-4543-87cc-583dae222748\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.928067 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8dd7ab-1f63-4543-87cc-583dae222748-combined-ca-bundle\") 
pod \"9b8dd7ab-1f63-4543-87cc-583dae222748\" (UID: \"9b8dd7ab-1f63-4543-87cc-583dae222748\") " Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.928618 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b8dd7ab-1f63-4543-87cc-583dae222748-logs" (OuterVolumeSpecName: "logs") pod "9b8dd7ab-1f63-4543-87cc-583dae222748" (UID: "9b8dd7ab-1f63-4543-87cc-583dae222748"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.928990 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b8dd7ab-1f63-4543-87cc-583dae222748-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.929010 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c25290f4-4f85-47fd-af5d-f141d5bb80a1-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.929020 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpjh4\" (UniqueName: \"kubernetes.io/projected/c25290f4-4f85-47fd-af5d-f141d5bb80a1-kube-api-access-xpjh4\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.929030 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c25290f4-4f85-47fd-af5d-f141d5bb80a1-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.933149 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b8dd7ab-1f63-4543-87cc-583dae222748-kube-api-access-6lwfc" (OuterVolumeSpecName: "kube-api-access-6lwfc") pod "9b8dd7ab-1f63-4543-87cc-583dae222748" (UID: "9b8dd7ab-1f63-4543-87cc-583dae222748"). InnerVolumeSpecName "kube-api-access-6lwfc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.966435 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8dd7ab-1f63-4543-87cc-583dae222748-config-data" (OuterVolumeSpecName: "config-data") pod "9b8dd7ab-1f63-4543-87cc-583dae222748" (UID: "9b8dd7ab-1f63-4543-87cc-583dae222748"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.988621 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" event={"ID":"c25290f4-4f85-47fd-af5d-f141d5bb80a1","Type":"ContainerDied","Data":"51baf6ea8745285cbcf70d5acdf675c581e0e62898abd961616f8badf5411537"} Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.988651 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.988660 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51baf6ea8745285cbcf70d5acdf675c581e0e62898abd961616f8badf5411537" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.990700 4915 generic.go:334] "Generic (PLEG): container finished" podID="9b8dd7ab-1f63-4543-87cc-583dae222748" containerID="4c8a638356fdd403a5f2d009fa7513c36a06b3197ceb5d9e6cf9282e031d42c9" exitCode=0 Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.990946 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b8dd7ab-1f63-4543-87cc-583dae222748","Type":"ContainerDied","Data":"4c8a638356fdd403a5f2d009fa7513c36a06b3197ceb5d9e6cf9282e031d42c9"} Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.990980 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"9b8dd7ab-1f63-4543-87cc-583dae222748","Type":"ContainerDied","Data":"f16fcd2e4f1c88f16ae8b81387e4c844154633b39f1dc99df8471a7b013fa376"} Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.991001 4915 scope.go:117] "RemoveContainer" containerID="4c8a638356fdd403a5f2d009fa7513c36a06b3197ceb5d9e6cf9282e031d42c9" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.991168 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.993916 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="ceilometer-central-agent" containerID="cri-o://b8d05d985c03924be76a33666745e38b628061ff079931cd4f8e435a3ae002c0" gracePeriod=30 Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.994028 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="proxy-httpd" containerID="cri-o://1c3f85f030574530b5c274ca7f6f63ab46f05581ab8ecf7c49a7576b90dfcf46" gracePeriod=30 Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.994067 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="sg-core" containerID="cri-o://12623aaea0585834cf3d636af0a2517a7058863cf041570d5e01ecaaddc8ffe6" gracePeriod=30 Nov 24 21:45:03 crc kubenswrapper[4915]: I1124 21:45:03.994102 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="ceilometer-notification-agent" containerID="cri-o://2c3c0ffe43c82efed1fed6746480af89199ffe36901542566d4424989d05b51d" gracePeriod=30 Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.002979 4915 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8dd7ab-1f63-4543-87cc-583dae222748-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b8dd7ab-1f63-4543-87cc-583dae222748" (UID: "9b8dd7ab-1f63-4543-87cc-583dae222748"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.011349 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.032262 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b8dd7ab-1f63-4543-87cc-583dae222748-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.032299 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b8dd7ab-1f63-4543-87cc-583dae222748-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.032312 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lwfc\" (UniqueName: \"kubernetes.io/projected/9b8dd7ab-1f63-4543-87cc-583dae222748-kube-api-access-6lwfc\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.037605 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.039849 4915 scope.go:117] "RemoveContainer" containerID="38c5fbf95086d9b42e3298a1a68e186fb7322fbbebf222d744d7e947e609c504" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.078412 4915 scope.go:117] "RemoveContainer" containerID="4c8a638356fdd403a5f2d009fa7513c36a06b3197ceb5d9e6cf9282e031d42c9" Nov 24 21:45:04 crc kubenswrapper[4915]: E1124 21:45:04.079010 4915 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4c8a638356fdd403a5f2d009fa7513c36a06b3197ceb5d9e6cf9282e031d42c9\": container with ID starting with 4c8a638356fdd403a5f2d009fa7513c36a06b3197ceb5d9e6cf9282e031d42c9 not found: ID does not exist" containerID="4c8a638356fdd403a5f2d009fa7513c36a06b3197ceb5d9e6cf9282e031d42c9" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.079054 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8a638356fdd403a5f2d009fa7513c36a06b3197ceb5d9e6cf9282e031d42c9"} err="failed to get container status \"4c8a638356fdd403a5f2d009fa7513c36a06b3197ceb5d9e6cf9282e031d42c9\": rpc error: code = NotFound desc = could not find container \"4c8a638356fdd403a5f2d009fa7513c36a06b3197ceb5d9e6cf9282e031d42c9\": container with ID starting with 4c8a638356fdd403a5f2d009fa7513c36a06b3197ceb5d9e6cf9282e031d42c9 not found: ID does not exist" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.079078 4915 scope.go:117] "RemoveContainer" containerID="38c5fbf95086d9b42e3298a1a68e186fb7322fbbebf222d744d7e947e609c504" Nov 24 21:45:04 crc kubenswrapper[4915]: E1124 21:45:04.079339 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38c5fbf95086d9b42e3298a1a68e186fb7322fbbebf222d744d7e947e609c504\": container with ID starting with 38c5fbf95086d9b42e3298a1a68e186fb7322fbbebf222d744d7e947e609c504 not found: ID does not exist" containerID="38c5fbf95086d9b42e3298a1a68e186fb7322fbbebf222d744d7e947e609c504" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.079416 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38c5fbf95086d9b42e3298a1a68e186fb7322fbbebf222d744d7e947e609c504"} err="failed to get container status \"38c5fbf95086d9b42e3298a1a68e186fb7322fbbebf222d744d7e947e609c504\": rpc error: code = NotFound desc = could not find container 
\"38c5fbf95086d9b42e3298a1a68e186fb7322fbbebf222d744d7e947e609c504\": container with ID starting with 38c5fbf95086d9b42e3298a1a68e186fb7322fbbebf222d744d7e947e609c504 not found: ID does not exist" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.348160 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.369154 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.384380 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 21:45:04 crc kubenswrapper[4915]: E1124 21:45:04.385160 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25290f4-4f85-47fd-af5d-f141d5bb80a1" containerName="collect-profiles" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.385180 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25290f4-4f85-47fd-af5d-f141d5bb80a1" containerName="collect-profiles" Nov 24 21:45:04 crc kubenswrapper[4915]: E1124 21:45:04.385193 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8dd7ab-1f63-4543-87cc-583dae222748" containerName="nova-api-log" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.385200 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8dd7ab-1f63-4543-87cc-583dae222748" containerName="nova-api-log" Nov 24 21:45:04 crc kubenswrapper[4915]: E1124 21:45:04.385216 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8dd7ab-1f63-4543-87cc-583dae222748" containerName="nova-api-api" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.385223 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8dd7ab-1f63-4543-87cc-583dae222748" containerName="nova-api-api" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.385418 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8dd7ab-1f63-4543-87cc-583dae222748" 
containerName="nova-api-log" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.385456 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8dd7ab-1f63-4543-87cc-583dae222748" containerName="nova-api-api" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.385464 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c25290f4-4f85-47fd-af5d-f141d5bb80a1" containerName="collect-profiles" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.386689 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.390102 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.396903 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.397461 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.400751 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.439525 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-config-data\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.439569 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mjdk\" (UniqueName: \"kubernetes.io/projected/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-kube-api-access-4mjdk\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 
21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.439622 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.439718 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.439749 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-logs\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.439812 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-public-tls-certs\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.446723 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b8dd7ab-1f63-4543-87cc-583dae222748" path="/var/lib/kubelet/pods/9b8dd7ab-1f63-4543-87cc-583dae222748/volumes" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.542500 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-config-data\") pod \"nova-api-0\" (UID: 
\"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.542596 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mjdk\" (UniqueName: \"kubernetes.io/projected/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-kube-api-access-4mjdk\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.542674 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.542944 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.542990 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-logs\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.543367 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-public-tls-certs\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.543713 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-logs\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.548970 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-internal-tls-certs\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.550363 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.557035 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-config-data\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.557550 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-public-tls-certs\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.559562 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mjdk\" (UniqueName: \"kubernetes.io/projected/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-kube-api-access-4mjdk\") pod \"nova-api-0\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " pod="openstack/nova-api-0" Nov 24 21:45:04 crc kubenswrapper[4915]: I1124 21:45:04.772195 4915 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.013747 4915 generic.go:334] "Generic (PLEG): container finished" podID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerID="1c3f85f030574530b5c274ca7f6f63ab46f05581ab8ecf7c49a7576b90dfcf46" exitCode=0 Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.014112 4915 generic.go:334] "Generic (PLEG): container finished" podID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerID="12623aaea0585834cf3d636af0a2517a7058863cf041570d5e01ecaaddc8ffe6" exitCode=2 Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.014124 4915 generic.go:334] "Generic (PLEG): container finished" podID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerID="2c3c0ffe43c82efed1fed6746480af89199ffe36901542566d4424989d05b51d" exitCode=0 Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.014069 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420410a4-f0bc-4c3f-a9b5-90c05ce6de57","Type":"ContainerDied","Data":"1c3f85f030574530b5c274ca7f6f63ab46f05581ab8ecf7c49a7576b90dfcf46"} Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.014203 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420410a4-f0bc-4c3f-a9b5-90c05ce6de57","Type":"ContainerDied","Data":"12623aaea0585834cf3d636af0a2517a7058863cf041570d5e01ecaaddc8ffe6"} Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.014217 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420410a4-f0bc-4c3f-a9b5-90c05ce6de57","Type":"ContainerDied","Data":"2c3c0ffe43c82efed1fed6746480af89199ffe36901542566d4424989d05b51d"} Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.046556 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.214289 4915 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-8gn7w"] Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.215884 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.219161 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.223621 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.226499 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8gn7w"] Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.258131 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-scripts\") pod \"nova-cell1-cell-mapping-8gn7w\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.258180 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz4h7\" (UniqueName: \"kubernetes.io/projected/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-kube-api-access-pz4h7\") pod \"nova-cell1-cell-mapping-8gn7w\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.258241 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8gn7w\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " 
pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.258270 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-config-data\") pod \"nova-cell1-cell-mapping-8gn7w\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.301097 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.360853 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-config-data\") pod \"nova-cell1-cell-mapping-8gn7w\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.361174 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-scripts\") pod \"nova-cell1-cell-mapping-8gn7w\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.361215 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz4h7\" (UniqueName: \"kubernetes.io/projected/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-kube-api-access-pz4h7\") pod \"nova-cell1-cell-mapping-8gn7w\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.361293 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8gn7w\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.367641 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-scripts\") pod \"nova-cell1-cell-mapping-8gn7w\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.369554 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-config-data\") pod \"nova-cell1-cell-mapping-8gn7w\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.372184 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8gn7w\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.377468 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz4h7\" (UniqueName: \"kubernetes.io/projected/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-kube-api-access-pz4h7\") pod \"nova-cell1-cell-mapping-8gn7w\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:05 crc kubenswrapper[4915]: I1124 21:45:05.538066 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.034653 4915 generic.go:334] "Generic (PLEG): container finished" podID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerID="b8d05d985c03924be76a33666745e38b628061ff079931cd4f8e435a3ae002c0" exitCode=0 Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.034741 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420410a4-f0bc-4c3f-a9b5-90c05ce6de57","Type":"ContainerDied","Data":"b8d05d985c03924be76a33666745e38b628061ff079931cd4f8e435a3ae002c0"} Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.041321 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed","Type":"ContainerStarted","Data":"d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3"} Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.041356 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed","Type":"ContainerStarted","Data":"c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750"} Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.041368 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed","Type":"ContainerStarted","Data":"70377ea00fda678285345d9205f95558816994b06985019481b5b392a8a8024f"} Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.075842 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.075823795 podStartE2EDuration="2.075823795s" podCreationTimestamp="2025-11-24 21:45:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:45:06.063263786 +0000 UTC m=+1524.379515959" 
watchObservedRunningTime="2025-11-24 21:45:06.075823795 +0000 UTC m=+1524.392075968" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.111214 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8gn7w"] Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.395735 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.500666 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-run-httpd\") pod \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.500907 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79zl7\" (UniqueName: \"kubernetes.io/projected/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-kube-api-access-79zl7\") pod \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.500968 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-log-httpd\") pod \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.500986 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-scripts\") pod \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.501001 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-sg-core-conf-yaml\") pod \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.501024 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-config-data\") pod \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.501047 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-combined-ca-bundle\") pod \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\" (UID: \"420410a4-f0bc-4c3f-a9b5-90c05ce6de57\") " Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.501352 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "420410a4-f0bc-4c3f-a9b5-90c05ce6de57" (UID: "420410a4-f0bc-4c3f-a9b5-90c05ce6de57"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.502260 4915 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.503525 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "420410a4-f0bc-4c3f-a9b5-90c05ce6de57" (UID: "420410a4-f0bc-4c3f-a9b5-90c05ce6de57"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.508254 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-scripts" (OuterVolumeSpecName: "scripts") pod "420410a4-f0bc-4c3f-a9b5-90c05ce6de57" (UID: "420410a4-f0bc-4c3f-a9b5-90c05ce6de57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.508341 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-kube-api-access-79zl7" (OuterVolumeSpecName: "kube-api-access-79zl7") pod "420410a4-f0bc-4c3f-a9b5-90c05ce6de57" (UID: "420410a4-f0bc-4c3f-a9b5-90c05ce6de57"). InnerVolumeSpecName "kube-api-access-79zl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.567646 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "420410a4-f0bc-4c3f-a9b5-90c05ce6de57" (UID: "420410a4-f0bc-4c3f-a9b5-90c05ce6de57"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.605442 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79zl7\" (UniqueName: \"kubernetes.io/projected/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-kube-api-access-79zl7\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.605488 4915 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.605531 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.605545 4915 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.626220 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "420410a4-f0bc-4c3f-a9b5-90c05ce6de57" (UID: "420410a4-f0bc-4c3f-a9b5-90c05ce6de57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.647798 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-config-data" (OuterVolumeSpecName: "config-data") pod "420410a4-f0bc-4c3f-a9b5-90c05ce6de57" (UID: "420410a4-f0bc-4c3f-a9b5-90c05ce6de57"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.707727 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:06 crc kubenswrapper[4915]: I1124 21:45:06.707929 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/420410a4-f0bc-4c3f-a9b5-90c05ce6de57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.068706 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"420410a4-f0bc-4c3f-a9b5-90c05ce6de57","Type":"ContainerDied","Data":"be3273c7128671a871b3089243cf9805f404d55d4dbfb85bad640bf8a9e762a7"} Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.068788 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.070031 4915 scope.go:117] "RemoveContainer" containerID="1c3f85f030574530b5c274ca7f6f63ab46f05581ab8ecf7c49a7576b90dfcf46" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.071391 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8gn7w" event={"ID":"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2","Type":"ContainerStarted","Data":"8851b4319d1ff78ba7dbe3b800e3d65ae76886d09a7c90672b009bf9fbbb94e2"} Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.071442 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8gn7w" event={"ID":"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2","Type":"ContainerStarted","Data":"10a708237165e52e6e38c44093f7d8aeca964b072ab4b0bd32dc7a06a6c3e4ed"} Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.097051 4915 scope.go:117] "RemoveContainer" 
containerID="12623aaea0585834cf3d636af0a2517a7058863cf041570d5e01ecaaddc8ffe6" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.097727 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-8gn7w" podStartSLOduration=2.097710125 podStartE2EDuration="2.097710125s" podCreationTimestamp="2025-11-24 21:45:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:45:07.091864358 +0000 UTC m=+1525.408116531" watchObservedRunningTime="2025-11-24 21:45:07.097710125 +0000 UTC m=+1525.413962298" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.123016 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.131624 4915 scope.go:117] "RemoveContainer" containerID="2c3c0ffe43c82efed1fed6746480af89199ffe36901542566d4424989d05b51d" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.153904 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.162321 4915 scope.go:117] "RemoveContainer" containerID="b8d05d985c03924be76a33666745e38b628061ff079931cd4f8e435a3ae002c0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.176169 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:45:07 crc kubenswrapper[4915]: E1124 21:45:07.176758 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="ceilometer-central-agent" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.176799 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="ceilometer-central-agent" Nov 24 21:45:07 crc kubenswrapper[4915]: E1124 21:45:07.176848 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="proxy-httpd" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.176860 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="proxy-httpd" Nov 24 21:45:07 crc kubenswrapper[4915]: E1124 21:45:07.176892 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="sg-core" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.176900 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="sg-core" Nov 24 21:45:07 crc kubenswrapper[4915]: E1124 21:45:07.176928 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="ceilometer-notification-agent" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.176936 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="ceilometer-notification-agent" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.177252 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="ceilometer-notification-agent" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.177289 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="sg-core" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.177307 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="ceilometer-central-agent" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.177330 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" containerName="proxy-httpd" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.180589 4915 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.183733 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.183975 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.184186 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.189304 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.219977 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-config-data\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.220079 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.220163 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t55qn\" (UniqueName: \"kubernetes.io/projected/75360af7-e79c-4a47-8a96-a78d7bc8804e-kube-api-access-t55qn\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.220291 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75360af7-e79c-4a47-8a96-a78d7bc8804e-log-httpd\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.220604 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75360af7-e79c-4a47-8a96-a78d7bc8804e-run-httpd\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.220649 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.220724 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-scripts\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.220788 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.323277 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75360af7-e79c-4a47-8a96-a78d7bc8804e-run-httpd\") pod \"ceilometer-0\" (UID: 
\"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.323335 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.323375 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-scripts\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.323398 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.323467 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-config-data\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.323518 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.323570 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t55qn\" (UniqueName: \"kubernetes.io/projected/75360af7-e79c-4a47-8a96-a78d7bc8804e-kube-api-access-t55qn\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.323634 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75360af7-e79c-4a47-8a96-a78d7bc8804e-log-httpd\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.323733 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75360af7-e79c-4a47-8a96-a78d7bc8804e-run-httpd\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.324447 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75360af7-e79c-4a47-8a96-a78d7bc8804e-log-httpd\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.327723 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-scripts\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.327813 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 
21:45:07.328551 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-config-data\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.329434 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.329559 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.340266 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t55qn\" (UniqueName: \"kubernetes.io/projected/75360af7-e79c-4a47-8a96-a78d7bc8804e-kube-api-access-t55qn\") pod \"ceilometer-0\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") " pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.429940 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.521083 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.529576 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-m2d7d"] Nov 24 21:45:07 crc kubenswrapper[4915]: I1124 21:45:07.529977 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" podUID="8641d657-17ff-4ab2-a915-d7b945a8e7bf" containerName="dnsmasq-dns" containerID="cri-o://b0ada4a0a9bb059b84f137507a9ad84920888af4f10277f2370926934a5114c9" gracePeriod=10 Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.088491 4915 generic.go:334] "Generic (PLEG): container finished" podID="8641d657-17ff-4ab2-a915-d7b945a8e7bf" containerID="b0ada4a0a9bb059b84f137507a9ad84920888af4f10277f2370926934a5114c9" exitCode=0 Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.089001 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" event={"ID":"8641d657-17ff-4ab2-a915-d7b945a8e7bf","Type":"ContainerDied","Data":"b0ada4a0a9bb059b84f137507a9ad84920888af4f10277f2370926934a5114c9"} Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.149524 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.210661 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.259514 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-ovsdbserver-nb\") pod \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.259737 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-ovsdbserver-sb\") pod \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.259853 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-config\") pod \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.259961 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-dns-swift-storage-0\") pod \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.260063 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-dns-svc\") pod \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.260153 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqprf\" 
(UniqueName: \"kubernetes.io/projected/8641d657-17ff-4ab2-a915-d7b945a8e7bf-kube-api-access-rqprf\") pod \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\" (UID: \"8641d657-17ff-4ab2-a915-d7b945a8e7bf\") " Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.265761 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8641d657-17ff-4ab2-a915-d7b945a8e7bf-kube-api-access-rqprf" (OuterVolumeSpecName: "kube-api-access-rqprf") pod "8641d657-17ff-4ab2-a915-d7b945a8e7bf" (UID: "8641d657-17ff-4ab2-a915-d7b945a8e7bf"). InnerVolumeSpecName "kube-api-access-rqprf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.330641 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8641d657-17ff-4ab2-a915-d7b945a8e7bf" (UID: "8641d657-17ff-4ab2-a915-d7b945a8e7bf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.335356 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8641d657-17ff-4ab2-a915-d7b945a8e7bf" (UID: "8641d657-17ff-4ab2-a915-d7b945a8e7bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.347001 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8641d657-17ff-4ab2-a915-d7b945a8e7bf" (UID: "8641d657-17ff-4ab2-a915-d7b945a8e7bf"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.351718 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-config" (OuterVolumeSpecName: "config") pod "8641d657-17ff-4ab2-a915-d7b945a8e7bf" (UID: "8641d657-17ff-4ab2-a915-d7b945a8e7bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.354067 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8641d657-17ff-4ab2-a915-d7b945a8e7bf" (UID: "8641d657-17ff-4ab2-a915-d7b945a8e7bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.363757 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.363817 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.363828 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.363837 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:08 crc 
kubenswrapper[4915]: I1124 21:45:08.363847 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqprf\" (UniqueName: \"kubernetes.io/projected/8641d657-17ff-4ab2-a915-d7b945a8e7bf-kube-api-access-rqprf\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.363856 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8641d657-17ff-4ab2-a915-d7b945a8e7bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:08 crc kubenswrapper[4915]: I1124 21:45:08.446686 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="420410a4-f0bc-4c3f-a9b5-90c05ce6de57" path="/var/lib/kubelet/pods/420410a4-f0bc-4c3f-a9b5-90c05ce6de57/volumes" Nov 24 21:45:09 crc kubenswrapper[4915]: I1124 21:45:09.100943 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75360af7-e79c-4a47-8a96-a78d7bc8804e","Type":"ContainerStarted","Data":"bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12"} Nov 24 21:45:09 crc kubenswrapper[4915]: I1124 21:45:09.101276 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75360af7-e79c-4a47-8a96-a78d7bc8804e","Type":"ContainerStarted","Data":"484a867d9866479fe14ed331dae27f49c5c71eeb7caaac8da8bf49f2c98255f0"} Nov 24 21:45:09 crc kubenswrapper[4915]: I1124 21:45:09.105553 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" event={"ID":"8641d657-17ff-4ab2-a915-d7b945a8e7bf","Type":"ContainerDied","Data":"39aaaf5c99efbcaaff78ed994539dbf2fe4abf23f9f08beb61681d9c1e03410b"} Nov 24 21:45:09 crc kubenswrapper[4915]: I1124 21:45:09.105625 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-m2d7d" Nov 24 21:45:09 crc kubenswrapper[4915]: I1124 21:45:09.105645 4915 scope.go:117] "RemoveContainer" containerID="b0ada4a0a9bb059b84f137507a9ad84920888af4f10277f2370926934a5114c9" Nov 24 21:45:09 crc kubenswrapper[4915]: I1124 21:45:09.128695 4915 scope.go:117] "RemoveContainer" containerID="6757784d5b17baf5e0989902d8ff83b28deb761d4564fe5916b71dbb6e4aa8ec" Nov 24 21:45:09 crc kubenswrapper[4915]: I1124 21:45:09.129944 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-m2d7d"] Nov 24 21:45:09 crc kubenswrapper[4915]: I1124 21:45:09.143843 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-m2d7d"] Nov 24 21:45:10 crc kubenswrapper[4915]: I1124 21:45:10.148506 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75360af7-e79c-4a47-8a96-a78d7bc8804e","Type":"ContainerStarted","Data":"82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2"} Nov 24 21:45:10 crc kubenswrapper[4915]: I1124 21:45:10.441287 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8641d657-17ff-4ab2-a915-d7b945a8e7bf" path="/var/lib/kubelet/pods/8641d657-17ff-4ab2-a915-d7b945a8e7bf/volumes" Nov 24 21:45:11 crc kubenswrapper[4915]: I1124 21:45:11.166235 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75360af7-e79c-4a47-8a96-a78d7bc8804e","Type":"ContainerStarted","Data":"61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db"} Nov 24 21:45:11 crc kubenswrapper[4915]: I1124 21:45:11.394161 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 24 21:45:12 crc kubenswrapper[4915]: I1124 21:45:12.178878 4915 generic.go:334] "Generic (PLEG): container finished" podID="1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2" 
containerID="8851b4319d1ff78ba7dbe3b800e3d65ae76886d09a7c90672b009bf9fbbb94e2" exitCode=0 Nov 24 21:45:12 crc kubenswrapper[4915]: I1124 21:45:12.178945 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8gn7w" event={"ID":"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2","Type":"ContainerDied","Data":"8851b4319d1ff78ba7dbe3b800e3d65ae76886d09a7c90672b009bf9fbbb94e2"} Nov 24 21:45:12 crc kubenswrapper[4915]: I1124 21:45:12.184048 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75360af7-e79c-4a47-8a96-a78d7bc8804e","Type":"ContainerStarted","Data":"9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98"} Nov 24 21:45:12 crc kubenswrapper[4915]: I1124 21:45:12.184368 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 21:45:12 crc kubenswrapper[4915]: I1124 21:45:12.230606 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.192538453 podStartE2EDuration="5.230584124s" podCreationTimestamp="2025-11-24 21:45:07 +0000 UTC" firstStartedPulling="2025-11-24 21:45:08.150084218 +0000 UTC m=+1526.466336391" lastFinishedPulling="2025-11-24 21:45:11.188129889 +0000 UTC m=+1529.504382062" observedRunningTime="2025-11-24 21:45:12.22081445 +0000 UTC m=+1530.537066633" watchObservedRunningTime="2025-11-24 21:45:12.230584124 +0000 UTC m=+1530.546836317" Nov 24 21:45:13 crc kubenswrapper[4915]: I1124 21:45:13.662136 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:13 crc kubenswrapper[4915]: I1124 21:45:13.695818 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-scripts\") pod \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " Nov 24 21:45:13 crc kubenswrapper[4915]: I1124 21:45:13.696001 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz4h7\" (UniqueName: \"kubernetes.io/projected/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-kube-api-access-pz4h7\") pod \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " Nov 24 21:45:13 crc kubenswrapper[4915]: I1124 21:45:13.696091 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-combined-ca-bundle\") pod \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " Nov 24 21:45:13 crc kubenswrapper[4915]: I1124 21:45:13.696182 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-config-data\") pod \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\" (UID: \"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2\") " Nov 24 21:45:13 crc kubenswrapper[4915]: I1124 21:45:13.706166 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-kube-api-access-pz4h7" (OuterVolumeSpecName: "kube-api-access-pz4h7") pod "1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2" (UID: "1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2"). InnerVolumeSpecName "kube-api-access-pz4h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:13 crc kubenswrapper[4915]: I1124 21:45:13.706199 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-scripts" (OuterVolumeSpecName: "scripts") pod "1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2" (UID: "1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:13 crc kubenswrapper[4915]: I1124 21:45:13.742027 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2" (UID: "1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:13 crc kubenswrapper[4915]: I1124 21:45:13.757383 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-config-data" (OuterVolumeSpecName: "config-data") pod "1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2" (UID: "1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:13 crc kubenswrapper[4915]: I1124 21:45:13.798687 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz4h7\" (UniqueName: \"kubernetes.io/projected/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-kube-api-access-pz4h7\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:13 crc kubenswrapper[4915]: I1124 21:45:13.798725 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:13 crc kubenswrapper[4915]: I1124 21:45:13.798735 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:13 crc kubenswrapper[4915]: I1124 21:45:13.798744 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:14 crc kubenswrapper[4915]: I1124 21:45:14.206349 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8gn7w" event={"ID":"1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2","Type":"ContainerDied","Data":"10a708237165e52e6e38c44093f7d8aeca964b072ab4b0bd32dc7a06a6c3e4ed"} Nov 24 21:45:14 crc kubenswrapper[4915]: I1124 21:45:14.206578 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10a708237165e52e6e38c44093f7d8aeca964b072ab4b0bd32dc7a06a6c3e4ed" Nov 24 21:45:14 crc kubenswrapper[4915]: I1124 21:45:14.206411 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8gn7w" Nov 24 21:45:14 crc kubenswrapper[4915]: I1124 21:45:14.405000 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:45:14 crc kubenswrapper[4915]: I1124 21:45:14.405365 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" containerName="nova-api-log" containerID="cri-o://c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750" gracePeriod=30 Nov 24 21:45:14 crc kubenswrapper[4915]: I1124 21:45:14.405535 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" containerName="nova-api-api" containerID="cri-o://d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3" gracePeriod=30 Nov 24 21:45:14 crc kubenswrapper[4915]: I1124 21:45:14.450237 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:45:14 crc kubenswrapper[4915]: I1124 21:45:14.450478 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8fd1d036-89ac-47f6-8551-60f27e07700e" containerName="nova-scheduler-scheduler" containerID="cri-o://a4eba0bb468d4d3fdaa6063f5393964bef891aaf13dee453fb35e20c9787be1f" gracePeriod=30 Nov 24 21:45:14 crc kubenswrapper[4915]: I1124 21:45:14.459953 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:45:14 crc kubenswrapper[4915]: I1124 21:45:14.460235 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fd9c0d54-711e-4173-bc12-063f838129e4" containerName="nova-metadata-log" containerID="cri-o://9a8e6378a26a819d1c386971aa686b62d52e803cc3f083df6ef55beda11dcd0d" gracePeriod=30 Nov 24 21:45:14 crc kubenswrapper[4915]: I1124 21:45:14.460416 4915 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fd9c0d54-711e-4173-bc12-063f838129e4" containerName="nova-metadata-metadata" containerID="cri-o://4d191e4ffe24bf8103bc844b3369351ac9597be595150c6a693ec88108a5ac59" gracePeriod=30 Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.173481 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.235649 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-public-tls-certs\") pod \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.235879 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mjdk\" (UniqueName: \"kubernetes.io/projected/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-kube-api-access-4mjdk\") pod \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.235936 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-combined-ca-bundle\") pod \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.235984 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-internal-tls-certs\") pod \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.236062 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-config-data\") pod \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.236179 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-logs\") pod \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\" (UID: \"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed\") " Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.236572 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-logs" (OuterVolumeSpecName: "logs") pod "dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" (UID: "dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.237046 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.241019 4915 generic.go:334] "Generic (PLEG): container finished" podID="dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" containerID="d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3" exitCode=0 Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.241048 4915 generic.go:334] "Generic (PLEG): container finished" podID="dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" containerID="c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750" exitCode=143 Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.241096 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed","Type":"ContainerDied","Data":"d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3"} Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.241109 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.241134 4915 scope.go:117] "RemoveContainer" containerID="d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.241123 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed","Type":"ContainerDied","Data":"c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750"} Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.241603 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed","Type":"ContainerDied","Data":"70377ea00fda678285345d9205f95558816994b06985019481b5b392a8a8024f"} Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.247441 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-kube-api-access-4mjdk" (OuterVolumeSpecName: "kube-api-access-4mjdk") pod "dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" (UID: "dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed"). InnerVolumeSpecName "kube-api-access-4mjdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.249602 4915 generic.go:334] "Generic (PLEG): container finished" podID="fd9c0d54-711e-4173-bc12-063f838129e4" containerID="9a8e6378a26a819d1c386971aa686b62d52e803cc3f083df6ef55beda11dcd0d" exitCode=143 Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.250013 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd9c0d54-711e-4173-bc12-063f838129e4","Type":"ContainerDied","Data":"9a8e6378a26a819d1c386971aa686b62d52e803cc3f083df6ef55beda11dcd0d"} Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.282492 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-config-data" (OuterVolumeSpecName: "config-data") pod "dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" (UID: "dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.295030 4915 scope.go:117] "RemoveContainer" containerID="c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.327741 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" (UID: "dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.335907 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" (UID: "dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.339803 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.339839 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mjdk\" (UniqueName: \"kubernetes.io/projected/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-kube-api-access-4mjdk\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.339853 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.339870 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.353827 4915 scope.go:117] "RemoveContainer" containerID="d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3" Nov 24 21:45:15 crc kubenswrapper[4915]: E1124 21:45:15.354280 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3\": container with ID starting with d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3 not found: ID does not exist" containerID="d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.354331 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3"} err="failed to get container status \"d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3\": rpc error: code = NotFound desc = could not find container \"d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3\": container with ID starting with d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3 not found: ID does not exist" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.354365 4915 scope.go:117] "RemoveContainer" containerID="c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750" Nov 24 21:45:15 crc kubenswrapper[4915]: E1124 21:45:15.354620 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750\": container with ID starting with c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750 not found: ID does not exist" containerID="c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.354646 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750"} err="failed to get container status \"c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750\": rpc error: code = NotFound desc = could not find container \"c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750\": container with ID 
starting with c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750 not found: ID does not exist" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.354665 4915 scope.go:117] "RemoveContainer" containerID="d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.355036 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3"} err="failed to get container status \"d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3\": rpc error: code = NotFound desc = could not find container \"d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3\": container with ID starting with d295cc71c58af053ca1b6f4462b146390fd5f91c28aed82cd221283d0a3599a3 not found: ID does not exist" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.355056 4915 scope.go:117] "RemoveContainer" containerID="c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.355275 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750"} err="failed to get container status \"c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750\": rpc error: code = NotFound desc = could not find container \"c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750\": container with ID starting with c7cf7d70a86bd4c7ffc0371296be395780501263887475b2a4dba0a8e2480750 not found: ID does not exist" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.365423 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" (UID: 
"dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.441604 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.659993 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.694826 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.720949 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 24 21:45:15 crc kubenswrapper[4915]: E1124 21:45:15.721686 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" containerName="nova-api-log" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.721716 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" containerName="nova-api-log" Nov 24 21:45:15 crc kubenswrapper[4915]: E1124 21:45:15.721790 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8641d657-17ff-4ab2-a915-d7b945a8e7bf" containerName="init" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.721801 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8641d657-17ff-4ab2-a915-d7b945a8e7bf" containerName="init" Nov 24 21:45:15 crc kubenswrapper[4915]: E1124 21:45:15.721826 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" containerName="nova-api-api" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.721834 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" 
containerName="nova-api-api" Nov 24 21:45:15 crc kubenswrapper[4915]: E1124 21:45:15.721859 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8641d657-17ff-4ab2-a915-d7b945a8e7bf" containerName="dnsmasq-dns" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.721867 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8641d657-17ff-4ab2-a915-d7b945a8e7bf" containerName="dnsmasq-dns" Nov 24 21:45:15 crc kubenswrapper[4915]: E1124 21:45:15.721877 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2" containerName="nova-manage" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.721885 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2" containerName="nova-manage" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.722196 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8641d657-17ff-4ab2-a915-d7b945a8e7bf" containerName="dnsmasq-dns" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.722239 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" containerName="nova-api-api" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.722259 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2" containerName="nova-manage" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.722292 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" containerName="nova-api-log" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.724352 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.726605 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.726688 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.727127 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.732610 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.752692 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-logs\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.752918 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.753115 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.753170 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-config-data\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.753320 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-public-tls-certs\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.753410 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vflp2\" (UniqueName: \"kubernetes.io/projected/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-kube-api-access-vflp2\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.855286 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.855439 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.855469 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-config-data\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 
21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.855503 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-public-tls-certs\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.855528 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vflp2\" (UniqueName: \"kubernetes.io/projected/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-kube-api-access-vflp2\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.855568 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-logs\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.855962 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-logs\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.860243 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.861811 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.864591 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-public-tls-certs\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.865094 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-config-data\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:15 crc kubenswrapper[4915]: I1124 21:45:15.886663 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vflp2\" (UniqueName: \"kubernetes.io/projected/b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb-kube-api-access-vflp2\") pod \"nova-api-0\" (UID: \"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb\") " pod="openstack/nova-api-0" Nov 24 21:45:16 crc kubenswrapper[4915]: I1124 21:45:16.046065 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 24 21:45:16 crc kubenswrapper[4915]: I1124 21:45:16.441077 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed" path="/var/lib/kubelet/pods/dbb23916-9a94-4ec9-b93e-e8cf3c4c26ed/volumes" Nov 24 21:45:16 crc kubenswrapper[4915]: I1124 21:45:16.618124 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 24 21:45:17 crc kubenswrapper[4915]: I1124 21:45:17.285769 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb","Type":"ContainerStarted","Data":"a7039af04d93657cbdbe6c8b048a5e4a4cb29992494fb3ae4b57018daa325681"} Nov 24 21:45:17 crc kubenswrapper[4915]: I1124 21:45:17.286332 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb","Type":"ContainerStarted","Data":"2e9196c774ec2a7a8c7299c9074be13c37232149c3637b779d8d66add6a78fe6"} Nov 24 21:45:17 crc kubenswrapper[4915]: I1124 21:45:17.614811 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fd9c0d54-711e-4173-bc12-063f838129e4" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.244:8775/\": read tcp 10.217.0.2:35484->10.217.0.244:8775: read: connection reset by peer" Nov 24 21:45:17 crc kubenswrapper[4915]: I1124 21:45:17.614811 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="fd9c0d54-711e-4173-bc12-063f838129e4" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.244:8775/\": read tcp 10.217.0.2:35492->10.217.0.244:8775: read: connection reset by peer" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.235527 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.241240 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.302212 4915 generic.go:334] "Generic (PLEG): container finished" podID="8fd1d036-89ac-47f6-8551-60f27e07700e" containerID="a4eba0bb468d4d3fdaa6063f5393964bef891aaf13dee453fb35e20c9787be1f" exitCode=0 Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.302297 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fd1d036-89ac-47f6-8551-60f27e07700e","Type":"ContainerDied","Data":"a4eba0bb468d4d3fdaa6063f5393964bef891aaf13dee453fb35e20c9787be1f"} Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.302330 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fd1d036-89ac-47f6-8551-60f27e07700e","Type":"ContainerDied","Data":"0e92efdd83ea3e70970d0af9ff746aa5d320b8c17ce03ecf6198288075128a11"} Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.302354 4915 scope.go:117] "RemoveContainer" containerID="a4eba0bb468d4d3fdaa6063f5393964bef891aaf13dee453fb35e20c9787be1f" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.302366 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.311068 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb","Type":"ContainerStarted","Data":"bff0b891bd98ab886b9a290c21a94d357c3acb0fbdd37f436da9dc644ee505ad"} Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.315904 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-combined-ca-bundle\") pod \"fd9c0d54-711e-4173-bc12-063f838129e4\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.316189 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd9c0d54-711e-4173-bc12-063f838129e4-logs\") pod \"fd9c0d54-711e-4173-bc12-063f838129e4\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.316267 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd1d036-89ac-47f6-8551-60f27e07700e-combined-ca-bundle\") pod \"8fd1d036-89ac-47f6-8551-60f27e07700e\" (UID: \"8fd1d036-89ac-47f6-8551-60f27e07700e\") " Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.316307 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf5bj\" (UniqueName: \"kubernetes.io/projected/8fd1d036-89ac-47f6-8551-60f27e07700e-kube-api-access-bf5bj\") pod \"8fd1d036-89ac-47f6-8551-60f27e07700e\" (UID: \"8fd1d036-89ac-47f6-8551-60f27e07700e\") " Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.317084 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mblzn\" (UniqueName: 
\"kubernetes.io/projected/fd9c0d54-711e-4173-bc12-063f838129e4-kube-api-access-mblzn\") pod \"fd9c0d54-711e-4173-bc12-063f838129e4\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.317185 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-nova-metadata-tls-certs\") pod \"fd9c0d54-711e-4173-bc12-063f838129e4\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.317272 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-config-data\") pod \"fd9c0d54-711e-4173-bc12-063f838129e4\" (UID: \"fd9c0d54-711e-4173-bc12-063f838129e4\") " Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.317298 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd1d036-89ac-47f6-8551-60f27e07700e-config-data\") pod \"8fd1d036-89ac-47f6-8551-60f27e07700e\" (UID: \"8fd1d036-89ac-47f6-8551-60f27e07700e\") " Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.319684 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd9c0d54-711e-4173-bc12-063f838129e4-logs" (OuterVolumeSpecName: "logs") pod "fd9c0d54-711e-4173-bc12-063f838129e4" (UID: "fd9c0d54-711e-4173-bc12-063f838129e4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.321009 4915 generic.go:334] "Generic (PLEG): container finished" podID="fd9c0d54-711e-4173-bc12-063f838129e4" containerID="4d191e4ffe24bf8103bc844b3369351ac9597be595150c6a693ec88108a5ac59" exitCode=0 Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.321055 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd9c0d54-711e-4173-bc12-063f838129e4","Type":"ContainerDied","Data":"4d191e4ffe24bf8103bc844b3369351ac9597be595150c6a693ec88108a5ac59"} Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.321066 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.321086 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fd9c0d54-711e-4173-bc12-063f838129e4","Type":"ContainerDied","Data":"57af5ac38b2217d4ef840c36711fcd9138dc4058e88dba668c32a86eac92b561"} Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.334907 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9c0d54-711e-4173-bc12-063f838129e4-kube-api-access-mblzn" (OuterVolumeSpecName: "kube-api-access-mblzn") pod "fd9c0d54-711e-4173-bc12-063f838129e4" (UID: "fd9c0d54-711e-4173-bc12-063f838129e4"). InnerVolumeSpecName "kube-api-access-mblzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.343465 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.343446254 podStartE2EDuration="3.343446254s" podCreationTimestamp="2025-11-24 21:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:45:18.338318526 +0000 UTC m=+1536.654570699" watchObservedRunningTime="2025-11-24 21:45:18.343446254 +0000 UTC m=+1536.659698427" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.347439 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd1d036-89ac-47f6-8551-60f27e07700e-kube-api-access-bf5bj" (OuterVolumeSpecName: "kube-api-access-bf5bj") pod "8fd1d036-89ac-47f6-8551-60f27e07700e" (UID: "8fd1d036-89ac-47f6-8551-60f27e07700e"). InnerVolumeSpecName "kube-api-access-bf5bj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.350441 4915 scope.go:117] "RemoveContainer" containerID="a4eba0bb468d4d3fdaa6063f5393964bef891aaf13dee453fb35e20c9787be1f" Nov 24 21:45:18 crc kubenswrapper[4915]: E1124 21:45:18.351329 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4eba0bb468d4d3fdaa6063f5393964bef891aaf13dee453fb35e20c9787be1f\": container with ID starting with a4eba0bb468d4d3fdaa6063f5393964bef891aaf13dee453fb35e20c9787be1f not found: ID does not exist" containerID="a4eba0bb468d4d3fdaa6063f5393964bef891aaf13dee453fb35e20c9787be1f" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.351365 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4eba0bb468d4d3fdaa6063f5393964bef891aaf13dee453fb35e20c9787be1f"} err="failed to get container status \"a4eba0bb468d4d3fdaa6063f5393964bef891aaf13dee453fb35e20c9787be1f\": rpc error: code = NotFound desc = could not find container \"a4eba0bb468d4d3fdaa6063f5393964bef891aaf13dee453fb35e20c9787be1f\": container with ID starting with a4eba0bb468d4d3fdaa6063f5393964bef891aaf13dee453fb35e20c9787be1f not found: ID does not exist" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.351388 4915 scope.go:117] "RemoveContainer" containerID="4d191e4ffe24bf8103bc844b3369351ac9597be595150c6a693ec88108a5ac59" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.364633 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd1d036-89ac-47f6-8551-60f27e07700e-config-data" (OuterVolumeSpecName: "config-data") pod "8fd1d036-89ac-47f6-8551-60f27e07700e" (UID: "8fd1d036-89ac-47f6-8551-60f27e07700e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.367459 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-config-data" (OuterVolumeSpecName: "config-data") pod "fd9c0d54-711e-4173-bc12-063f838129e4" (UID: "fd9c0d54-711e-4173-bc12-063f838129e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.368102 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd9c0d54-711e-4173-bc12-063f838129e4" (UID: "fd9c0d54-711e-4173-bc12-063f838129e4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.369067 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd1d036-89ac-47f6-8551-60f27e07700e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fd1d036-89ac-47f6-8551-60f27e07700e" (UID: "8fd1d036-89ac-47f6-8551-60f27e07700e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.407883 4915 scope.go:117] "RemoveContainer" containerID="9a8e6378a26a819d1c386971aa686b62d52e803cc3f083df6ef55beda11dcd0d" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.413949 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fd9c0d54-711e-4173-bc12-063f838129e4" (UID: "fd9c0d54-711e-4173-bc12-063f838129e4"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.421205 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.421245 4915 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd9c0d54-711e-4173-bc12-063f838129e4-logs\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.421259 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd1d036-89ac-47f6-8551-60f27e07700e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.421272 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf5bj\" (UniqueName: \"kubernetes.io/projected/8fd1d036-89ac-47f6-8551-60f27e07700e-kube-api-access-bf5bj\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.421285 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mblzn\" (UniqueName: \"kubernetes.io/projected/fd9c0d54-711e-4173-bc12-063f838129e4-kube-api-access-mblzn\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.421300 4915 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.421313 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd9c0d54-711e-4173-bc12-063f838129e4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 
21:45:18.421326 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd1d036-89ac-47f6-8551-60f27e07700e-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.434352 4915 scope.go:117] "RemoveContainer" containerID="4d191e4ffe24bf8103bc844b3369351ac9597be595150c6a693ec88108a5ac59" Nov 24 21:45:18 crc kubenswrapper[4915]: E1124 21:45:18.436525 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d191e4ffe24bf8103bc844b3369351ac9597be595150c6a693ec88108a5ac59\": container with ID starting with 4d191e4ffe24bf8103bc844b3369351ac9597be595150c6a693ec88108a5ac59 not found: ID does not exist" containerID="4d191e4ffe24bf8103bc844b3369351ac9597be595150c6a693ec88108a5ac59" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.436573 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d191e4ffe24bf8103bc844b3369351ac9597be595150c6a693ec88108a5ac59"} err="failed to get container status \"4d191e4ffe24bf8103bc844b3369351ac9597be595150c6a693ec88108a5ac59\": rpc error: code = NotFound desc = could not find container \"4d191e4ffe24bf8103bc844b3369351ac9597be595150c6a693ec88108a5ac59\": container with ID starting with 4d191e4ffe24bf8103bc844b3369351ac9597be595150c6a693ec88108a5ac59 not found: ID does not exist" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.436602 4915 scope.go:117] "RemoveContainer" containerID="9a8e6378a26a819d1c386971aa686b62d52e803cc3f083df6ef55beda11dcd0d" Nov 24 21:45:18 crc kubenswrapper[4915]: E1124 21:45:18.439018 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a8e6378a26a819d1c386971aa686b62d52e803cc3f083df6ef55beda11dcd0d\": container with ID starting with 9a8e6378a26a819d1c386971aa686b62d52e803cc3f083df6ef55beda11dcd0d not found: ID 
does not exist" containerID="9a8e6378a26a819d1c386971aa686b62d52e803cc3f083df6ef55beda11dcd0d" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.439088 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a8e6378a26a819d1c386971aa686b62d52e803cc3f083df6ef55beda11dcd0d"} err="failed to get container status \"9a8e6378a26a819d1c386971aa686b62d52e803cc3f083df6ef55beda11dcd0d\": rpc error: code = NotFound desc = could not find container \"9a8e6378a26a819d1c386971aa686b62d52e803cc3f083df6ef55beda11dcd0d\": container with ID starting with 9a8e6378a26a819d1c386971aa686b62d52e803cc3f083df6ef55beda11dcd0d not found: ID does not exist" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.633263 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.650205 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.666718 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.682088 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.699871 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:45:18 crc kubenswrapper[4915]: E1124 21:45:18.700546 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9c0d54-711e-4173-bc12-063f838129e4" containerName="nova-metadata-log" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.700573 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9c0d54-711e-4173-bc12-063f838129e4" containerName="nova-metadata-log" Nov 24 21:45:18 crc kubenswrapper[4915]: E1124 21:45:18.700595 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8fd1d036-89ac-47f6-8551-60f27e07700e" containerName="nova-scheduler-scheduler" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.700605 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd1d036-89ac-47f6-8551-60f27e07700e" containerName="nova-scheduler-scheduler" Nov 24 21:45:18 crc kubenswrapper[4915]: E1124 21:45:18.700649 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd9c0d54-711e-4173-bc12-063f838129e4" containerName="nova-metadata-metadata" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.700658 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd9c0d54-711e-4173-bc12-063f838129e4" containerName="nova-metadata-metadata" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.700987 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd1d036-89ac-47f6-8551-60f27e07700e" containerName="nova-scheduler-scheduler" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.701008 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9c0d54-711e-4173-bc12-063f838129e4" containerName="nova-metadata-metadata" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.701032 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9c0d54-711e-4173-bc12-063f838129e4" containerName="nova-metadata-log" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.702008 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.708512 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.715470 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.726961 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.729433 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.731275 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.731341 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.736514 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eb4f336-48eb-4d67-b147-cb401de61753-config-data\") pod \"nova-scheduler-0\" (UID: \"2eb4f336-48eb-4d67-b147-cb401de61753\") " pod="openstack/nova-scheduler-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.736703 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5qqz\" (UniqueName: \"kubernetes.io/projected/2eb4f336-48eb-4d67-b147-cb401de61753-kube-api-access-z5qqz\") pod \"nova-scheduler-0\" (UID: \"2eb4f336-48eb-4d67-b147-cb401de61753\") " pod="openstack/nova-scheduler-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.737223 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb4f336-48eb-4d67-b147-cb401de61753-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2eb4f336-48eb-4d67-b147-cb401de61753\") " pod="openstack/nova-scheduler-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.760597 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.841046 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5qqz\" (UniqueName: \"kubernetes.io/projected/2eb4f336-48eb-4d67-b147-cb401de61753-kube-api-access-z5qqz\") pod \"nova-scheduler-0\" (UID: \"2eb4f336-48eb-4d67-b147-cb401de61753\") " pod="openstack/nova-scheduler-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.841166 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63d6368-a68c-4520-835c-2799c2d64673-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.841346 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb4f336-48eb-4d67-b147-cb401de61753-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2eb4f336-48eb-4d67-b147-cb401de61753\") " pod="openstack/nova-scheduler-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.841527 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b63d6368-a68c-4520-835c-2799c2d64673-config-data\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.841588 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b63d6368-a68c-4520-835c-2799c2d64673-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.841629 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b63d6368-a68c-4520-835c-2799c2d64673-logs\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.841829 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2p7r\" (UniqueName: \"kubernetes.io/projected/b63d6368-a68c-4520-835c-2799c2d64673-kube-api-access-k2p7r\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.841862 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2eb4f336-48eb-4d67-b147-cb401de61753-config-data\") pod \"nova-scheduler-0\" (UID: \"2eb4f336-48eb-4d67-b147-cb401de61753\") " pod="openstack/nova-scheduler-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.845454 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2eb4f336-48eb-4d67-b147-cb401de61753-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2eb4f336-48eb-4d67-b147-cb401de61753\") " pod="openstack/nova-scheduler-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.848754 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2eb4f336-48eb-4d67-b147-cb401de61753-config-data\") pod \"nova-scheduler-0\" (UID: \"2eb4f336-48eb-4d67-b147-cb401de61753\") " pod="openstack/nova-scheduler-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.856945 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5qqz\" (UniqueName: \"kubernetes.io/projected/2eb4f336-48eb-4d67-b147-cb401de61753-kube-api-access-z5qqz\") pod \"nova-scheduler-0\" (UID: \"2eb4f336-48eb-4d67-b147-cb401de61753\") " pod="openstack/nova-scheduler-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.944223 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b63d6368-a68c-4520-835c-2799c2d64673-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.944281 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b63d6368-a68c-4520-835c-2799c2d64673-logs\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.944379 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2p7r\" (UniqueName: \"kubernetes.io/projected/b63d6368-a68c-4520-835c-2799c2d64673-kube-api-access-k2p7r\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.944517 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63d6368-a68c-4520-835c-2799c2d64673-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " 
pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.944594 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b63d6368-a68c-4520-835c-2799c2d64673-config-data\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.946001 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b63d6368-a68c-4520-835c-2799c2d64673-logs\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.949446 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b63d6368-a68c-4520-835c-2799c2d64673-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.949464 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b63d6368-a68c-4520-835c-2799c2d64673-config-data\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.950331 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b63d6368-a68c-4520-835c-2799c2d64673-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " pod="openstack/nova-metadata-0" Nov 24 21:45:18 crc kubenswrapper[4915]: I1124 21:45:18.966756 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2p7r\" (UniqueName: 
\"kubernetes.io/projected/b63d6368-a68c-4520-835c-2799c2d64673-kube-api-access-k2p7r\") pod \"nova-metadata-0\" (UID: \"b63d6368-a68c-4520-835c-2799c2d64673\") " pod="openstack/nova-metadata-0" Nov 24 21:45:19 crc kubenswrapper[4915]: I1124 21:45:19.094272 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 24 21:45:19 crc kubenswrapper[4915]: I1124 21:45:19.102823 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 24 21:45:19 crc kubenswrapper[4915]: I1124 21:45:19.638109 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 24 21:45:19 crc kubenswrapper[4915]: W1124 21:45:19.660960 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb63d6368_a68c_4520_835c_2799c2d64673.slice/crio-7f3985535c6b255fa138c19d0f814b120a848975a8be538423e296d646c38400 WatchSource:0}: Error finding container 7f3985535c6b255fa138c19d0f814b120a848975a8be538423e296d646c38400: Status 404 returned error can't find the container with id 7f3985535c6b255fa138c19d0f814b120a848975a8be538423e296d646c38400 Nov 24 21:45:19 crc kubenswrapper[4915]: I1124 21:45:19.672183 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 24 21:45:20 crc kubenswrapper[4915]: I1124 21:45:20.365631 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2eb4f336-48eb-4d67-b147-cb401de61753","Type":"ContainerStarted","Data":"283ee8dc6df04c972381448dd260b8e73d8c62fda7aed8521f1af69e3119d047"} Nov 24 21:45:20 crc kubenswrapper[4915]: I1124 21:45:20.365991 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2eb4f336-48eb-4d67-b147-cb401de61753","Type":"ContainerStarted","Data":"0e422cb5ca218d1e60a0b101a85a12be4444e1a5dbf68c9c535ad8242975e2f5"} Nov 24 
21:45:20 crc kubenswrapper[4915]: I1124 21:45:20.370037 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b63d6368-a68c-4520-835c-2799c2d64673","Type":"ContainerStarted","Data":"4a4ae32be5f5ef9644e906a00de5b4a22c6580de826ba226688a67e812405187"} Nov 24 21:45:20 crc kubenswrapper[4915]: I1124 21:45:20.370073 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b63d6368-a68c-4520-835c-2799c2d64673","Type":"ContainerStarted","Data":"bb23fb93f68f0af98e8c025d9ea0614e5a64354d8c817e216e54d194497f14d0"} Nov 24 21:45:20 crc kubenswrapper[4915]: I1124 21:45:20.370082 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b63d6368-a68c-4520-835c-2799c2d64673","Type":"ContainerStarted","Data":"7f3985535c6b255fa138c19d0f814b120a848975a8be538423e296d646c38400"} Nov 24 21:45:20 crc kubenswrapper[4915]: I1124 21:45:20.394248 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.394233336 podStartE2EDuration="2.394233336s" podCreationTimestamp="2025-11-24 21:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:45:20.382186001 +0000 UTC m=+1538.698438174" watchObservedRunningTime="2025-11-24 21:45:20.394233336 +0000 UTC m=+1538.710485509" Nov 24 21:45:20 crc kubenswrapper[4915]: I1124 21:45:20.423927 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.423906534 podStartE2EDuration="2.423906534s" podCreationTimestamp="2025-11-24 21:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:45:20.41110248 +0000 UTC m=+1538.727354683" watchObservedRunningTime="2025-11-24 21:45:20.423906534 +0000 UTC 
m=+1538.740158717" Nov 24 21:45:20 crc kubenswrapper[4915]: I1124 21:45:20.439441 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fd1d036-89ac-47f6-8551-60f27e07700e" path="/var/lib/kubelet/pods/8fd1d036-89ac-47f6-8551-60f27e07700e/volumes" Nov 24 21:45:20 crc kubenswrapper[4915]: I1124 21:45:20.440033 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd9c0d54-711e-4173-bc12-063f838129e4" path="/var/lib/kubelet/pods/fd9c0d54-711e-4173-bc12-063f838129e4/volumes" Nov 24 21:45:24 crc kubenswrapper[4915]: I1124 21:45:24.094637 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 24 21:45:24 crc kubenswrapper[4915]: I1124 21:45:24.102988 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:45:24 crc kubenswrapper[4915]: I1124 21:45:24.103076 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 24 21:45:24 crc kubenswrapper[4915]: I1124 21:45:24.327265 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:45:24 crc kubenswrapper[4915]: I1124 21:45:24.327607 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:45:26 crc kubenswrapper[4915]: I1124 21:45:26.046666 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:45:26 crc kubenswrapper[4915]: I1124 21:45:26.047724 4915 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.063065 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.1:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.063076 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.1:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.462022 4915 generic.go:334] "Generic (PLEG): container finished" podID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerID="4cc36742319d5eb23e607f2e6c7adfd85aa5f6db934c9620dfb4eeba8e8343e2" exitCode=137 Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.462072 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f9c9b404-e3bf-41de-a3fe-f79c98113692","Type":"ContainerDied","Data":"4cc36742319d5eb23e607f2e6c7adfd85aa5f6db934c9620dfb4eeba8e8343e2"} Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.632086 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.682054 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-config-data\") pod \"f9c9b404-e3bf-41de-a3fe-f79c98113692\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.682310 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-scripts\") pod \"f9c9b404-e3bf-41de-a3fe-f79c98113692\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.682335 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-combined-ca-bundle\") pod \"f9c9b404-e3bf-41de-a3fe-f79c98113692\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.682399 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st989\" (UniqueName: \"kubernetes.io/projected/f9c9b404-e3bf-41de-a3fe-f79c98113692-kube-api-access-st989\") pod \"f9c9b404-e3bf-41de-a3fe-f79c98113692\" (UID: \"f9c9b404-e3bf-41de-a3fe-f79c98113692\") " Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.688366 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-scripts" (OuterVolumeSpecName: "scripts") pod "f9c9b404-e3bf-41de-a3fe-f79c98113692" (UID: "f9c9b404-e3bf-41de-a3fe-f79c98113692"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.719005 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c9b404-e3bf-41de-a3fe-f79c98113692-kube-api-access-st989" (OuterVolumeSpecName: "kube-api-access-st989") pod "f9c9b404-e3bf-41de-a3fe-f79c98113692" (UID: "f9c9b404-e3bf-41de-a3fe-f79c98113692"). InnerVolumeSpecName "kube-api-access-st989". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.788050 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.788310 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st989\" (UniqueName: \"kubernetes.io/projected/f9c9b404-e3bf-41de-a3fe-f79c98113692-kube-api-access-st989\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.837808 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9c9b404-e3bf-41de-a3fe-f79c98113692" (UID: "f9c9b404-e3bf-41de-a3fe-f79c98113692"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.867447 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-config-data" (OuterVolumeSpecName: "config-data") pod "f9c9b404-e3bf-41de-a3fe-f79c98113692" (UID: "f9c9b404-e3bf-41de-a3fe-f79c98113692"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.889927 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:27 crc kubenswrapper[4915]: I1124 21:45:27.889972 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9c9b404-e3bf-41de-a3fe-f79c98113692-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.476996 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f9c9b404-e3bf-41de-a3fe-f79c98113692","Type":"ContainerDied","Data":"d29658a14e70e1e1984c6af05ef2500063156005cb1fb87c3d3f159ce56e4d06"} Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.477360 4915 scope.go:117] "RemoveContainer" containerID="4cc36742319d5eb23e607f2e6c7adfd85aa5f6db934c9620dfb4eeba8e8343e2" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.477090 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.508735 4915 scope.go:117] "RemoveContainer" containerID="f67de18064128474d5b61da458198f58eeca5393b6dc5db0fb23457ad1a70f4f" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.527844 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.541724 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.565506 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 24 21:45:28 crc kubenswrapper[4915]: E1124 21:45:28.566102 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-listener" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.566121 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-listener" Nov 24 21:45:28 crc kubenswrapper[4915]: E1124 21:45:28.566140 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-notifier" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.566146 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-notifier" Nov 24 21:45:28 crc kubenswrapper[4915]: E1124 21:45:28.566189 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-evaluator" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.566197 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-evaluator" Nov 24 21:45:28 crc kubenswrapper[4915]: E1124 21:45:28.566216 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-api" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.566226 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-api" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.566485 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-notifier" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.566509 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-api" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.566527 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-evaluator" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.566544 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" containerName="aodh-listener" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.568822 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.582178 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.583201 4915 scope.go:117] "RemoveContainer" containerID="600cf224101aadbc6c4db75669402f568f1967c32b52a621986c15aa44f5a48c" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.583579 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.584193 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-497jp" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.584198 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.584305 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.584869 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.619407 4915 scope.go:117] "RemoveContainer" containerID="42b90e436064dd8168cddb993e9dbdef022a94757a87879e5aa20d49ca476381" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.705901 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6dtp\" (UniqueName: \"kubernetes.io/projected/f894aef5-bbf9-4a91-ab23-6d6c216d5645-kube-api-access-f6dtp\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.706286 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-scripts\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.706399 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-public-tls-certs\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.706547 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-config-data\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.706625 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-internal-tls-certs\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.706672 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.813855 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-scripts\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 
21:45:28.813932 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-public-tls-certs\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.814016 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-config-data\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.814074 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-internal-tls-certs\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.814128 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-combined-ca-bundle\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.814376 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6dtp\" (UniqueName: \"kubernetes.io/projected/f894aef5-bbf9-4a91-ab23-6d6c216d5645-kube-api-access-f6dtp\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.819316 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-combined-ca-bundle\") pod \"aodh-0\" (UID: 
\"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.819420 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-internal-tls-certs\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.819468 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-public-tls-certs\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.831421 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-scripts\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.836183 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-config-data\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.849369 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6dtp\" (UniqueName: \"kubernetes.io/projected/f894aef5-bbf9-4a91-ab23-6d6c216d5645-kube-api-access-f6dtp\") pod \"aodh-0\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " pod="openstack/aodh-0" Nov 24 21:45:28 crc kubenswrapper[4915]: I1124 21:45:28.907107 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 21:45:29 crc kubenswrapper[4915]: I1124 21:45:29.095151 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 24 21:45:29 crc kubenswrapper[4915]: I1124 21:45:29.103289 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 21:45:29 crc kubenswrapper[4915]: I1124 21:45:29.103331 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 24 21:45:29 crc kubenswrapper[4915]: I1124 21:45:29.133003 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 24 21:45:29 crc kubenswrapper[4915]: I1124 21:45:29.402150 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 21:45:29 crc kubenswrapper[4915]: W1124 21:45:29.410943 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf894aef5_bbf9_4a91_ab23_6d6c216d5645.slice/crio-5c3570bde007f345f538516f2fdbb4bc7ec0d0b1c2fe6b1f909d05a7cd3d8594 WatchSource:0}: Error finding container 5c3570bde007f345f538516f2fdbb4bc7ec0d0b1c2fe6b1f909d05a7cd3d8594: Status 404 returned error can't find the container with id 5c3570bde007f345f538516f2fdbb4bc7ec0d0b1c2fe6b1f909d05a7cd3d8594 Nov 24 21:45:29 crc kubenswrapper[4915]: I1124 21:45:29.490902 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f894aef5-bbf9-4a91-ab23-6d6c216d5645","Type":"ContainerStarted","Data":"5c3570bde007f345f538516f2fdbb4bc7ec0d0b1c2fe6b1f909d05a7cd3d8594"} Nov 24 21:45:29 crc kubenswrapper[4915]: I1124 21:45:29.520422 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 24 21:45:30 crc kubenswrapper[4915]: I1124 21:45:30.131011 4915 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-metadata-0" podUID="b63d6368-a68c-4520-835c-2799c2d64673" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.3:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:45:30 crc kubenswrapper[4915]: I1124 21:45:30.131342 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b63d6368-a68c-4520-835c-2799c2d64673" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.3:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 24 21:45:30 crc kubenswrapper[4915]: I1124 21:45:30.441606 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9c9b404-e3bf-41de-a3fe-f79c98113692" path="/var/lib/kubelet/pods/f9c9b404-e3bf-41de-a3fe-f79c98113692/volumes" Nov 24 21:45:30 crc kubenswrapper[4915]: I1124 21:45:30.508994 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f894aef5-bbf9-4a91-ab23-6d6c216d5645","Type":"ContainerStarted","Data":"47fd9b87119ef463887f8ae049c15821f51e7ce73f54aafcb36947bc15ab3256"} Nov 24 21:45:31 crc kubenswrapper[4915]: I1124 21:45:31.537662 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f894aef5-bbf9-4a91-ab23-6d6c216d5645","Type":"ContainerStarted","Data":"17de28c3ac630450d988b3309ab15d399b34975062a64a41ec8343541cea7975"} Nov 24 21:45:32 crc kubenswrapper[4915]: I1124 21:45:32.552422 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f894aef5-bbf9-4a91-ab23-6d6c216d5645","Type":"ContainerStarted","Data":"fa83c5b6b3d8da55e9ddcbdef89a0014366abf4dcc4962f6fb2799b368caf2c8"} Nov 24 21:45:32 crc kubenswrapper[4915]: I1124 21:45:32.552479 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"f894aef5-bbf9-4a91-ab23-6d6c216d5645","Type":"ContainerStarted","Data":"4116e70ddb762a4dc7cd825b53527324ad3c4beb8f5becda7fc1e6279d34f6e8"} Nov 24 21:45:32 crc kubenswrapper[4915]: I1124 21:45:32.581121 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.023827508 podStartE2EDuration="4.58109589s" podCreationTimestamp="2025-11-24 21:45:28 +0000 UTC" firstStartedPulling="2025-11-24 21:45:29.413348807 +0000 UTC m=+1547.729600980" lastFinishedPulling="2025-11-24 21:45:31.970617169 +0000 UTC m=+1550.286869362" observedRunningTime="2025-11-24 21:45:32.580741681 +0000 UTC m=+1550.896993884" watchObservedRunningTime="2025-11-24 21:45:32.58109589 +0000 UTC m=+1550.897348063" Nov 24 21:45:36 crc kubenswrapper[4915]: I1124 21:45:36.058566 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 21:45:36 crc kubenswrapper[4915]: I1124 21:45:36.059551 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 21:45:36 crc kubenswrapper[4915]: I1124 21:45:36.065985 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 24 21:45:36 crc kubenswrapper[4915]: I1124 21:45:36.068940 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 21:45:36 crc kubenswrapper[4915]: I1124 21:45:36.605465 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 24 21:45:36 crc kubenswrapper[4915]: I1124 21:45:36.614120 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 24 21:45:37 crc kubenswrapper[4915]: I1124 21:45:37.539544 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 21:45:39 crc kubenswrapper[4915]: I1124 21:45:39.111218 4915 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 21:45:39 crc kubenswrapper[4915]: I1124 21:45:39.113218 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 24 21:45:39 crc kubenswrapper[4915]: I1124 21:45:39.118872 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 21:45:39 crc kubenswrapper[4915]: I1124 21:45:39.665832 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 24 21:45:48 crc kubenswrapper[4915]: I1124 21:45:47.999543 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:45:48 crc kubenswrapper[4915]: I1124 21:45:48.956228 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:45:52 crc kubenswrapper[4915]: I1124 21:45:52.464597 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a45944d3-396b-4683-b9b5-8e42e9331043" containerName="rabbitmq" containerID="cri-o://fb78ca6bc2439f8a59cc72a28612166857898661871c918b5ae004b3bc4f379c" gracePeriod=604796 Nov 24 21:45:53 crc kubenswrapper[4915]: I1124 21:45:53.275677 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="8c50db1c-ac88-4299-ab96-8b750308610f" containerName="rabbitmq" containerID="cri-o://9d7ed096d74c8eb4cd8db7f36bbb38472431a51770566de7e8a3e42a55417774" gracePeriod=604796 Nov 24 21:45:54 crc kubenswrapper[4915]: I1124 21:45:54.327123 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:45:54 crc kubenswrapper[4915]: I1124 
21:45:54.327208 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:45:54 crc kubenswrapper[4915]: I1124 21:45:54.327274 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:45:54 crc kubenswrapper[4915]: I1124 21:45:54.328551 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:45:54 crc kubenswrapper[4915]: I1124 21:45:54.328665 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" gracePeriod=600 Nov 24 21:45:54 crc kubenswrapper[4915]: E1124 21:45:54.465637 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:45:54 crc kubenswrapper[4915]: I1124 21:45:54.864512 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" exitCode=0 Nov 24 21:45:54 crc kubenswrapper[4915]: I1124 21:45:54.864580 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489"} Nov 24 21:45:54 crc kubenswrapper[4915]: I1124 21:45:54.864628 4915 scope.go:117] "RemoveContainer" containerID="dce5b421849bc6dedc5b880936ade9c03271bcdbe605ecf2cad976e72aebbd14" Nov 24 21:45:54 crc kubenswrapper[4915]: I1124 21:45:54.865646 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:45:54 crc kubenswrapper[4915]: E1124 21:45:54.866077 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:45:55 crc kubenswrapper[4915]: I1124 21:45:55.992492 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a45944d3-396b-4683-b9b5-8e42e9331043" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Nov 24 21:45:56 crc kubenswrapper[4915]: I1124 21:45:56.073426 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="8c50db1c-ac88-4299-ab96-8b750308610f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Nov 24 21:45:58 crc kubenswrapper[4915]: I1124 
21:45:58.924926 4915 generic.go:334] "Generic (PLEG): container finished" podID="a45944d3-396b-4683-b9b5-8e42e9331043" containerID="fb78ca6bc2439f8a59cc72a28612166857898661871c918b5ae004b3bc4f379c" exitCode=0 Nov 24 21:45:58 crc kubenswrapper[4915]: I1124 21:45:58.925006 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a45944d3-396b-4683-b9b5-8e42e9331043","Type":"ContainerDied","Data":"fb78ca6bc2439f8a59cc72a28612166857898661871c918b5ae004b3bc4f379c"} Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.113614 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.275592 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-tls\") pod \"a45944d3-396b-4683-b9b5-8e42e9331043\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.275645 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a45944d3-396b-4683-b9b5-8e42e9331043-erlang-cookie-secret\") pod \"a45944d3-396b-4683-b9b5-8e42e9331043\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.275673 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-confd\") pod \"a45944d3-396b-4683-b9b5-8e42e9331043\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.275731 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-server-conf\") pod \"a45944d3-396b-4683-b9b5-8e42e9331043\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.275748 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-config-data\") pod \"a45944d3-396b-4683-b9b5-8e42e9331043\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.275763 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"a45944d3-396b-4683-b9b5-8e42e9331043\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.275820 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ftnx\" (UniqueName: \"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-kube-api-access-7ftnx\") pod \"a45944d3-396b-4683-b9b5-8e42e9331043\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.275863 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a45944d3-396b-4683-b9b5-8e42e9331043-pod-info\") pod \"a45944d3-396b-4683-b9b5-8e42e9331043\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.275911 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-erlang-cookie\") pod \"a45944d3-396b-4683-b9b5-8e42e9331043\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.275935 4915 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-plugins\") pod \"a45944d3-396b-4683-b9b5-8e42e9331043\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.275989 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-plugins-conf\") pod \"a45944d3-396b-4683-b9b5-8e42e9331043\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.277216 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a45944d3-396b-4683-b9b5-8e42e9331043" (UID: "a45944d3-396b-4683-b9b5-8e42e9331043"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.278069 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a45944d3-396b-4683-b9b5-8e42e9331043" (UID: "a45944d3-396b-4683-b9b5-8e42e9331043"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.279661 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a45944d3-396b-4683-b9b5-8e42e9331043" (UID: "a45944d3-396b-4683-b9b5-8e42e9331043"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.302016 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a45944d3-396b-4683-b9b5-8e42e9331043-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a45944d3-396b-4683-b9b5-8e42e9331043" (UID: "a45944d3-396b-4683-b9b5-8e42e9331043"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.301943 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a45944d3-396b-4683-b9b5-8e42e9331043" (UID: "a45944d3-396b-4683-b9b5-8e42e9331043"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.305912 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "a45944d3-396b-4683-b9b5-8e42e9331043" (UID: "a45944d3-396b-4683-b9b5-8e42e9331043"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.306007 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a45944d3-396b-4683-b9b5-8e42e9331043-pod-info" (OuterVolumeSpecName: "pod-info") pod "a45944d3-396b-4683-b9b5-8e42e9331043" (UID: "a45944d3-396b-4683-b9b5-8e42e9331043"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.324977 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-kube-api-access-7ftnx" (OuterVolumeSpecName: "kube-api-access-7ftnx") pod "a45944d3-396b-4683-b9b5-8e42e9331043" (UID: "a45944d3-396b-4683-b9b5-8e42e9331043"). InnerVolumeSpecName "kube-api-access-7ftnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.340341 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-config-data" (OuterVolumeSpecName: "config-data") pod "a45944d3-396b-4683-b9b5-8e42e9331043" (UID: "a45944d3-396b-4683-b9b5-8e42e9331043"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.378643 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-server-conf" (OuterVolumeSpecName: "server-conf") pod "a45944d3-396b-4683-b9b5-8e42e9331043" (UID: "a45944d3-396b-4683-b9b5-8e42e9331043"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.379149 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-server-conf\") pod \"a45944d3-396b-4683-b9b5-8e42e9331043\" (UID: \"a45944d3-396b-4683-b9b5-8e42e9331043\") " Nov 24 21:45:59 crc kubenswrapper[4915]: W1124 21:45:59.379334 4915 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a45944d3-396b-4683-b9b5-8e42e9331043/volumes/kubernetes.io~configmap/server-conf Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.379371 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-server-conf" (OuterVolumeSpecName: "server-conf") pod "a45944d3-396b-4683-b9b5-8e42e9331043" (UID: "a45944d3-396b-4683-b9b5-8e42e9331043"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.381112 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.381133 4915 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a45944d3-396b-4683-b9b5-8e42e9331043-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.381143 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.381151 4915 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.381175 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.381184 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ftnx\" (UniqueName: \"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-kube-api-access-7ftnx\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.381194 4915 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a45944d3-396b-4683-b9b5-8e42e9331043-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.381203 4915 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.381211 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.381220 4915 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a45944d3-396b-4683-b9b5-8e42e9331043-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.418168 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.483265 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.488064 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a45944d3-396b-4683-b9b5-8e42e9331043" (UID: "a45944d3-396b-4683-b9b5-8e42e9331043"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.587323 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a45944d3-396b-4683-b9b5-8e42e9331043-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.922178 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.936514 4915 generic.go:334] "Generic (PLEG): container finished" podID="8c50db1c-ac88-4299-ab96-8b750308610f" containerID="9d7ed096d74c8eb4cd8db7f36bbb38472431a51770566de7e8a3e42a55417774" exitCode=0 Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.936567 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c50db1c-ac88-4299-ab96-8b750308610f","Type":"ContainerDied","Data":"9d7ed096d74c8eb4cd8db7f36bbb38472431a51770566de7e8a3e42a55417774"} Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.936591 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c50db1c-ac88-4299-ab96-8b750308610f","Type":"ContainerDied","Data":"e05bb94401ccc07fabe73cf4cc54ed5685ae000ff382736c28f21a1b6526e564"} Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.936608 4915 scope.go:117] "RemoveContainer" containerID="9d7ed096d74c8eb4cd8db7f36bbb38472431a51770566de7e8a3e42a55417774" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.936727 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.939728 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a45944d3-396b-4683-b9b5-8e42e9331043","Type":"ContainerDied","Data":"56dd50820f29c65e8a0f79d8019990f79aa29ae33e7c868a42b7bb19461c4ce4"} Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.939802 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:45:59 crc kubenswrapper[4915]: I1124 21:45:59.966192 4915 scope.go:117] "RemoveContainer" containerID="534c9537314191cda6d87d81dd98fa53f08531ad5782dccb0402569ba537e1b7" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.041766 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.046802 4915 scope.go:117] "RemoveContainer" containerID="9d7ed096d74c8eb4cd8db7f36bbb38472431a51770566de7e8a3e42a55417774" Nov 24 21:46:00 crc kubenswrapper[4915]: E1124 21:46:00.047170 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d7ed096d74c8eb4cd8db7f36bbb38472431a51770566de7e8a3e42a55417774\": container with ID starting with 9d7ed096d74c8eb4cd8db7f36bbb38472431a51770566de7e8a3e42a55417774 not found: ID does not exist" containerID="9d7ed096d74c8eb4cd8db7f36bbb38472431a51770566de7e8a3e42a55417774" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.047194 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d7ed096d74c8eb4cd8db7f36bbb38472431a51770566de7e8a3e42a55417774"} err="failed to get container status \"9d7ed096d74c8eb4cd8db7f36bbb38472431a51770566de7e8a3e42a55417774\": rpc error: code = NotFound desc = could not find container \"9d7ed096d74c8eb4cd8db7f36bbb38472431a51770566de7e8a3e42a55417774\": container 
with ID starting with 9d7ed096d74c8eb4cd8db7f36bbb38472431a51770566de7e8a3e42a55417774 not found: ID does not exist" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.047214 4915 scope.go:117] "RemoveContainer" containerID="534c9537314191cda6d87d81dd98fa53f08531ad5782dccb0402569ba537e1b7" Nov 24 21:46:00 crc kubenswrapper[4915]: E1124 21:46:00.047891 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"534c9537314191cda6d87d81dd98fa53f08531ad5782dccb0402569ba537e1b7\": container with ID starting with 534c9537314191cda6d87d81dd98fa53f08531ad5782dccb0402569ba537e1b7 not found: ID does not exist" containerID="534c9537314191cda6d87d81dd98fa53f08531ad5782dccb0402569ba537e1b7" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.047914 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"534c9537314191cda6d87d81dd98fa53f08531ad5782dccb0402569ba537e1b7"} err="failed to get container status \"534c9537314191cda6d87d81dd98fa53f08531ad5782dccb0402569ba537e1b7\": rpc error: code = NotFound desc = could not find container \"534c9537314191cda6d87d81dd98fa53f08531ad5782dccb0402569ba537e1b7\": container with ID starting with 534c9537314191cda6d87d81dd98fa53f08531ad5782dccb0402569ba537e1b7 not found: ID does not exist" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.047931 4915 scope.go:117] "RemoveContainer" containerID="fb78ca6bc2439f8a59cc72a28612166857898661871c918b5ae004b3bc4f379c" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.074219 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.079077 4915 scope.go:117] "RemoveContainer" containerID="75d4058db27c5fe795eb5b76e0577c3f27bc129142a656460e05201b3b1c3c20" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.094758 4915 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/rabbitmq-server-0"] Nov 24 21:46:00 crc kubenswrapper[4915]: E1124 21:46:00.095390 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45944d3-396b-4683-b9b5-8e42e9331043" containerName="rabbitmq" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.095411 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45944d3-396b-4683-b9b5-8e42e9331043" containerName="rabbitmq" Nov 24 21:46:00 crc kubenswrapper[4915]: E1124 21:46:00.095451 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c50db1c-ac88-4299-ab96-8b750308610f" containerName="rabbitmq" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.095462 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c50db1c-ac88-4299-ab96-8b750308610f" containerName="rabbitmq" Nov 24 21:46:00 crc kubenswrapper[4915]: E1124 21:46:00.095475 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a45944d3-396b-4683-b9b5-8e42e9331043" containerName="setup-container" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.095484 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a45944d3-396b-4683-b9b5-8e42e9331043" containerName="setup-container" Nov 24 21:46:00 crc kubenswrapper[4915]: E1124 21:46:00.095500 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c50db1c-ac88-4299-ab96-8b750308610f" containerName="setup-container" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.095508 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c50db1c-ac88-4299-ab96-8b750308610f" containerName="setup-container" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.096511 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a45944d3-396b-4683-b9b5-8e42e9331043" containerName="rabbitmq" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.096569 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c50db1c-ac88-4299-ab96-8b750308610f" containerName="rabbitmq" Nov 24 
21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.098185 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.100622 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.100926 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.102703 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.102888 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.102993 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.103097 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5zgjl" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.104939 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.107471 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-tls\") pod \"8c50db1c-ac88-4299-ab96-8b750308610f\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.107544 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-plugins\") pod \"8c50db1c-ac88-4299-ab96-8b750308610f\" 
(UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.107607 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c50db1c-ac88-4299-ab96-8b750308610f-erlang-cookie-secret\") pod \"8c50db1c-ac88-4299-ab96-8b750308610f\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.107670 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-confd\") pod \"8c50db1c-ac88-4299-ab96-8b750308610f\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.107707 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-erlang-cookie\") pod \"8c50db1c-ac88-4299-ab96-8b750308610f\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.107767 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-plugins-conf\") pod \"8c50db1c-ac88-4299-ab96-8b750308610f\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.110305 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c50db1c-ac88-4299-ab96-8b750308610f-pod-info\") pod \"8c50db1c-ac88-4299-ab96-8b750308610f\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.110514 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"8c50db1c-ac88-4299-ab96-8b750308610f\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.110603 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2t62\" (UniqueName: \"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-kube-api-access-w2t62\") pod \"8c50db1c-ac88-4299-ab96-8b750308610f\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.110626 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-config-data\") pod \"8c50db1c-ac88-4299-ab96-8b750308610f\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.110677 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-server-conf\") pod \"8c50db1c-ac88-4299-ab96-8b750308610f\" (UID: \"8c50db1c-ac88-4299-ab96-8b750308610f\") " Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.112633 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8c50db1c-ac88-4299-ab96-8b750308610f" (UID: "8c50db1c-ac88-4299-ab96-8b750308610f"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.114536 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8c50db1c-ac88-4299-ab96-8b750308610f" (UID: "8c50db1c-ac88-4299-ab96-8b750308610f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.114569 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8c50db1c-ac88-4299-ab96-8b750308610f" (UID: "8c50db1c-ac88-4299-ab96-8b750308610f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.115594 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c50db1c-ac88-4299-ab96-8b750308610f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8c50db1c-ac88-4299-ab96-8b750308610f" (UID: "8c50db1c-ac88-4299-ab96-8b750308610f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.116368 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8c50db1c-ac88-4299-ab96-8b750308610f-pod-info" (OuterVolumeSpecName: "pod-info") pod "8c50db1c-ac88-4299-ab96-8b750308610f" (UID: "8c50db1c-ac88-4299-ab96-8b750308610f"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.116714 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8c50db1c-ac88-4299-ab96-8b750308610f" (UID: "8c50db1c-ac88-4299-ab96-8b750308610f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.120636 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.122256 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-kube-api-access-w2t62" (OuterVolumeSpecName: "kube-api-access-w2t62") pod "8c50db1c-ac88-4299-ab96-8b750308610f" (UID: "8c50db1c-ac88-4299-ab96-8b750308610f"). InnerVolumeSpecName "kube-api-access-w2t62". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.136713 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.142082 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "8c50db1c-ac88-4299-ab96-8b750308610f" (UID: "8c50db1c-ac88-4299-ab96-8b750308610f"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.176218 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-config-data" (OuterVolumeSpecName: "config-data") pod "8c50db1c-ac88-4299-ab96-8b750308610f" (UID: "8c50db1c-ac88-4299-ab96-8b750308610f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.223901 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-config-data\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.223984 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.224005 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.224021 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.224084 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.224150 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.224181 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.224199 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.225362 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc 
kubenswrapper[4915]: I1124 21:46:00.225422 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.225553 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldgls\" (UniqueName: \"kubernetes.io/projected/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-kube-api-access-ldgls\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.225912 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.225939 4915 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c50db1c-ac88-4299-ab96-8b750308610f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.225959 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.225976 4915 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.225989 4915 reconciler_common.go:293] "Volume detached for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/8c50db1c-ac88-4299-ab96-8b750308610f-pod-info\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.226018 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.226032 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2t62\" (UniqueName: \"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-kube-api-access-w2t62\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.226046 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.246184 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-server-conf" (OuterVolumeSpecName: "server-conf") pod "8c50db1c-ac88-4299-ab96-8b750308610f" (UID: "8c50db1c-ac88-4299-ab96-8b750308610f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.266854 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.285208 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8c50db1c-ac88-4299-ab96-8b750308610f" (UID: "8c50db1c-ac88-4299-ab96-8b750308610f"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.327736 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.327790 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.327832 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldgls\" (UniqueName: \"kubernetes.io/projected/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-kube-api-access-ldgls\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.327856 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-config-data\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.327891 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.327923 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.327941 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.327999 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.328050 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.328075 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.328094 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.328181 4915 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c50db1c-ac88-4299-ab96-8b750308610f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.328197 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.328207 4915 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c50db1c-ac88-4299-ab96-8b750308610f-server-conf\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.329919 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.330020 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.331102 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc 
kubenswrapper[4915]: I1124 21:46:00.331462 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-config-data\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.331479 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.331714 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.331718 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.334221 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.336538 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.337082 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.347416 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldgls\" (UniqueName: \"kubernetes.io/projected/28a8634d-9ce0-460f-a8a9-e5cb05fc63cc-kube-api-access-ldgls\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.379749 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-server-0\" (UID: \"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc\") " pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.441232 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a45944d3-396b-4683-b9b5-8e42e9331043" path="/var/lib/kubelet/pods/a45944d3-396b-4683-b9b5-8e42e9331043/volumes" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.497446 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.576953 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.590143 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.601627 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.610061 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.610204 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.615532 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.616032 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.616269 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.616510 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-m2qf6" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.616836 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.617111 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.617113 4915 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.739360 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.739568 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c951dd6-e7fe-411c-8156-92784c966328-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.739656 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c951dd6-e7fe-411c-8156-92784c966328-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.739687 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c951dd6-e7fe-411c-8156-92784c966328-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.739727 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c951dd6-e7fe-411c-8156-92784c966328-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.739771 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c951dd6-e7fe-411c-8156-92784c966328-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.739820 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c951dd6-e7fe-411c-8156-92784c966328-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.739944 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c951dd6-e7fe-411c-8156-92784c966328-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.740020 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c951dd6-e7fe-411c-8156-92784c966328-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.740042 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c951dd6-e7fe-411c-8156-92784c966328-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.740059 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zwz7\" (UniqueName: \"kubernetes.io/projected/6c951dd6-e7fe-411c-8156-92784c966328-kube-api-access-6zwz7\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.842126 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c951dd6-e7fe-411c-8156-92784c966328-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.842199 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c951dd6-e7fe-411c-8156-92784c966328-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.842246 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c951dd6-e7fe-411c-8156-92784c966328-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.842265 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c951dd6-e7fe-411c-8156-92784c966328-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" 
Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.842318 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c951dd6-e7fe-411c-8156-92784c966328-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.842335 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c951dd6-e7fe-411c-8156-92784c966328-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.842356 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c951dd6-e7fe-411c-8156-92784c966328-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.842374 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zwz7\" (UniqueName: \"kubernetes.io/projected/6c951dd6-e7fe-411c-8156-92784c966328-kube-api-access-6zwz7\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.842405 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.842478 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c951dd6-e7fe-411c-8156-92784c966328-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.842509 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c951dd6-e7fe-411c-8156-92784c966328-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.843418 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c951dd6-e7fe-411c-8156-92784c966328-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.843675 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.844989 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c951dd6-e7fe-411c-8156-92784c966328-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.845033 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6c951dd6-e7fe-411c-8156-92784c966328-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.845256 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c951dd6-e7fe-411c-8156-92784c966328-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.847932 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c951dd6-e7fe-411c-8156-92784c966328-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.851046 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c951dd6-e7fe-411c-8156-92784c966328-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.851888 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c951dd6-e7fe-411c-8156-92784c966328-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.852401 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c951dd6-e7fe-411c-8156-92784c966328-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.880638 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c951dd6-e7fe-411c-8156-92784c966328-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.887548 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zwz7\" (UniqueName: \"kubernetes.io/projected/6c951dd6-e7fe-411c-8156-92784c966328-kube-api-access-6zwz7\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.938927 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6c951dd6-e7fe-411c-8156-92784c966328\") " pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:00 crc kubenswrapper[4915]: I1124 21:46:00.952198 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:01 crc kubenswrapper[4915]: I1124 21:46:01.111747 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 24 21:46:01 crc kubenswrapper[4915]: I1124 21:46:01.444868 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 24 21:46:01 crc kubenswrapper[4915]: W1124 21:46:01.447324 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c951dd6_e7fe_411c_8156_92784c966328.slice/crio-91f1ea0207cfceda57e6d3311cf8296eeb15639e9f056b073c398553cad07e1c WatchSource:0}: Error finding container 91f1ea0207cfceda57e6d3311cf8296eeb15639e9f056b073c398553cad07e1c: Status 404 returned error can't find the container with id 91f1ea0207cfceda57e6d3311cf8296eeb15639e9f056b073c398553cad07e1c Nov 24 21:46:01 crc kubenswrapper[4915]: I1124 21:46:01.964682 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc","Type":"ContainerStarted","Data":"d2a1a72b138c85e875968384f61b85d569810a8b1fb96c6108f485647ba4d5b9"} Nov 24 21:46:01 crc kubenswrapper[4915]: I1124 21:46:01.966417 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6c951dd6-e7fe-411c-8156-92784c966328","Type":"ContainerStarted","Data":"91f1ea0207cfceda57e6d3311cf8296eeb15639e9f056b073c398553cad07e1c"} Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.314972 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-52lgl"] Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.317345 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.327334 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.364106 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-52lgl"] Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.439853 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c50db1c-ac88-4299-ab96-8b750308610f" path="/var/lib/kubelet/pods/8c50db1c-ac88-4299-ab96-8b750308610f/volumes" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.477308 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.477370 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.477406 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-config\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.477436 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vsgx\" (UniqueName: \"kubernetes.io/projected/f756a69e-2278-4383-8498-3bf49c149cb9-kube-api-access-4vsgx\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.477475 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.477582 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.477673 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.582065 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.582234 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.582258 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.582278 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-config\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.582299 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vsgx\" (UniqueName: \"kubernetes.io/projected/f756a69e-2278-4383-8498-3bf49c149cb9-kube-api-access-4vsgx\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.582331 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.582412 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.585818 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.586593 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.586824 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.586854 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.587493 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.587654 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-config\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.671339 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vsgx\" (UniqueName: \"kubernetes.io/projected/f756a69e-2278-4383-8498-3bf49c149cb9-kube-api-access-4vsgx\") pod \"dnsmasq-dns-5b75489c6f-52lgl\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:02 crc kubenswrapper[4915]: I1124 21:46:02.948193 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:03 crc kubenswrapper[4915]: W1124 21:46:03.855292 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf756a69e_2278_4383_8498_3bf49c149cb9.slice/crio-a680419d1448a9e0b54aafb5a04dbb79b76c9d90bfe1da3b8669e29e7084c677 WatchSource:0}: Error finding container a680419d1448a9e0b54aafb5a04dbb79b76c9d90bfe1da3b8669e29e7084c677: Status 404 returned error can't find the container with id a680419d1448a9e0b54aafb5a04dbb79b76c9d90bfe1da3b8669e29e7084c677 Nov 24 21:46:03 crc kubenswrapper[4915]: I1124 21:46:03.859030 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-52lgl"] Nov 24 21:46:04 crc kubenswrapper[4915]: I1124 21:46:04.017692 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" event={"ID":"f756a69e-2278-4383-8498-3bf49c149cb9","Type":"ContainerStarted","Data":"a680419d1448a9e0b54aafb5a04dbb79b76c9d90bfe1da3b8669e29e7084c677"} Nov 24 21:46:04 crc kubenswrapper[4915]: I1124 21:46:04.021194 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc","Type":"ContainerStarted","Data":"a1a66138fad7f415b658d09e015a2499be32736664e7b8e54425aa5ccbc7fb06"} Nov 24 21:46:04 crc kubenswrapper[4915]: I1124 21:46:04.029059 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6c951dd6-e7fe-411c-8156-92784c966328","Type":"ContainerStarted","Data":"01e8e928672818208bbd06572b4f66651485d8de6fbc53a3913c906d542d8b53"} Nov 24 21:46:05 crc kubenswrapper[4915]: I1124 21:46:05.039358 4915 generic.go:334] "Generic (PLEG): container finished" podID="f756a69e-2278-4383-8498-3bf49c149cb9" containerID="f49029c1a7ba8da1e49745f42588ebbf7a17a19a621a1de42b9ff6dcfa1b75db" exitCode=0 Nov 24 21:46:05 crc 
kubenswrapper[4915]: I1124 21:46:05.039558 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" event={"ID":"f756a69e-2278-4383-8498-3bf49c149cb9","Type":"ContainerDied","Data":"f49029c1a7ba8da1e49745f42588ebbf7a17a19a621a1de42b9ff6dcfa1b75db"} Nov 24 21:46:06 crc kubenswrapper[4915]: I1124 21:46:06.059620 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" event={"ID":"f756a69e-2278-4383-8498-3bf49c149cb9","Type":"ContainerStarted","Data":"b49f70e0f7cdc41a8164c858da5fd643c848ebc95f5a0e5ad313b65d19fa40e9"} Nov 24 21:46:06 crc kubenswrapper[4915]: I1124 21:46:06.060320 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:06 crc kubenswrapper[4915]: I1124 21:46:06.081504 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" podStartSLOduration=4.08148762 podStartE2EDuration="4.08148762s" podCreationTimestamp="2025-11-24 21:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:46:06.076511886 +0000 UTC m=+1584.392764059" watchObservedRunningTime="2025-11-24 21:46:06.08148762 +0000 UTC m=+1584.397739793" Nov 24 21:46:09 crc kubenswrapper[4915]: I1124 21:46:09.426996 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:46:09 crc kubenswrapper[4915]: E1124 21:46:09.427505 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" 
podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:46:12 crc kubenswrapper[4915]: I1124 21:46:12.950869 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.111347 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-jnbbb"] Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.111585 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" podUID="3d5f3f02-da73-476f-944d-3149838eb7e6" containerName="dnsmasq-dns" containerID="cri-o://68260b105130add9f5ea03994ff35072ea9a4287b7e2d5cf285e2f8e681f8e45" gracePeriod=10 Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.213646 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-krgft"] Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.219615 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.229576 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-krgft"] Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.292120 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.292308 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.292342 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.292374 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.292416 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhwtm\" (UniqueName: \"kubernetes.io/projected/4adb52f8-9d39-407f-8e70-4dfe00552554-kube-api-access-jhwtm\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.292435 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-config\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.292449 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.394821 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.395637 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.395723 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.396634 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhwtm\" (UniqueName: \"kubernetes.io/projected/4adb52f8-9d39-407f-8e70-4dfe00552554-kube-api-access-jhwtm\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.396977 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-config\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.396556 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.397979 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-config\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.398044 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.398179 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.398462 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.398872 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.399621 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.400371 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/4adb52f8-9d39-407f-8e70-4dfe00552554-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.418336 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhwtm\" (UniqueName: \"kubernetes.io/projected/4adb52f8-9d39-407f-8e70-4dfe00552554-kube-api-access-jhwtm\") pod \"dnsmasq-dns-5d75f767dc-krgft\" (UID: \"4adb52f8-9d39-407f-8e70-4dfe00552554\") " pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.592484 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.741313 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.812568 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-dns-svc\") pod \"3d5f3f02-da73-476f-944d-3149838eb7e6\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.812675 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-ovsdbserver-sb\") pod \"3d5f3f02-da73-476f-944d-3149838eb7e6\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.812928 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-config\") pod \"3d5f3f02-da73-476f-944d-3149838eb7e6\" (UID: 
\"3d5f3f02-da73-476f-944d-3149838eb7e6\") " Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.813168 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-dns-swift-storage-0\") pod \"3d5f3f02-da73-476f-944d-3149838eb7e6\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.813608 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-ovsdbserver-nb\") pod \"3d5f3f02-da73-476f-944d-3149838eb7e6\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.814909 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqkjc\" (UniqueName: \"kubernetes.io/projected/3d5f3f02-da73-476f-944d-3149838eb7e6-kube-api-access-pqkjc\") pod \"3d5f3f02-da73-476f-944d-3149838eb7e6\" (UID: \"3d5f3f02-da73-476f-944d-3149838eb7e6\") " Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.819532 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d5f3f02-da73-476f-944d-3149838eb7e6-kube-api-access-pqkjc" (OuterVolumeSpecName: "kube-api-access-pqkjc") pod "3d5f3f02-da73-476f-944d-3149838eb7e6" (UID: "3d5f3f02-da73-476f-944d-3149838eb7e6"). InnerVolumeSpecName "kube-api-access-pqkjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.895583 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-config" (OuterVolumeSpecName: "config") pod "3d5f3f02-da73-476f-944d-3149838eb7e6" (UID: "3d5f3f02-da73-476f-944d-3149838eb7e6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.899274 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3d5f3f02-da73-476f-944d-3149838eb7e6" (UID: "3d5f3f02-da73-476f-944d-3149838eb7e6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.899742 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3d5f3f02-da73-476f-944d-3149838eb7e6" (UID: "3d5f3f02-da73-476f-944d-3149838eb7e6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.905189 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3d5f3f02-da73-476f-944d-3149838eb7e6" (UID: "3d5f3f02-da73-476f-944d-3149838eb7e6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.918722 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.918761 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.918785 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqkjc\" (UniqueName: \"kubernetes.io/projected/3d5f3f02-da73-476f-944d-3149838eb7e6-kube-api-access-pqkjc\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.918798 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.918809 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:13 crc kubenswrapper[4915]: I1124 21:46:13.938654 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3d5f3f02-da73-476f-944d-3149838eb7e6" (UID: "3d5f3f02-da73-476f-944d-3149838eb7e6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.021388 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3d5f3f02-da73-476f-944d-3149838eb7e6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.067528 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-krgft"] Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.173868 4915 generic.go:334] "Generic (PLEG): container finished" podID="3d5f3f02-da73-476f-944d-3149838eb7e6" containerID="68260b105130add9f5ea03994ff35072ea9a4287b7e2d5cf285e2f8e681f8e45" exitCode=0 Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.173952 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" event={"ID":"3d5f3f02-da73-476f-944d-3149838eb7e6","Type":"ContainerDied","Data":"68260b105130add9f5ea03994ff35072ea9a4287b7e2d5cf285e2f8e681f8e45"} Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.173970 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.173985 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-jnbbb" event={"ID":"3d5f3f02-da73-476f-944d-3149838eb7e6","Type":"ContainerDied","Data":"a094200b26899fc2565c935de8a06a28b907ee7aa77292433a04da16881f9361"} Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.174013 4915 scope.go:117] "RemoveContainer" containerID="68260b105130add9f5ea03994ff35072ea9a4287b7e2d5cf285e2f8e681f8e45" Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.177117 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-krgft" event={"ID":"4adb52f8-9d39-407f-8e70-4dfe00552554","Type":"ContainerStarted","Data":"bc9c8d91a6ef4e363170f75b95986ef0e459250dbbd6c89c7a934c3a40bb4e74"} Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.213306 4915 scope.go:117] "RemoveContainer" containerID="0c5f8996c93e5cb8bd20845d4a98ef7a1a1d016a818e5d666a05dafc1fbc447b" Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.215962 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-jnbbb"] Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.227243 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-jnbbb"] Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.312578 4915 scope.go:117] "RemoveContainer" containerID="68260b105130add9f5ea03994ff35072ea9a4287b7e2d5cf285e2f8e681f8e45" Nov 24 21:46:14 crc kubenswrapper[4915]: E1124 21:46:14.313301 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68260b105130add9f5ea03994ff35072ea9a4287b7e2d5cf285e2f8e681f8e45\": container with ID starting with 68260b105130add9f5ea03994ff35072ea9a4287b7e2d5cf285e2f8e681f8e45 not found: ID does not exist" 
containerID="68260b105130add9f5ea03994ff35072ea9a4287b7e2d5cf285e2f8e681f8e45" Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.313339 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68260b105130add9f5ea03994ff35072ea9a4287b7e2d5cf285e2f8e681f8e45"} err="failed to get container status \"68260b105130add9f5ea03994ff35072ea9a4287b7e2d5cf285e2f8e681f8e45\": rpc error: code = NotFound desc = could not find container \"68260b105130add9f5ea03994ff35072ea9a4287b7e2d5cf285e2f8e681f8e45\": container with ID starting with 68260b105130add9f5ea03994ff35072ea9a4287b7e2d5cf285e2f8e681f8e45 not found: ID does not exist" Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.313367 4915 scope.go:117] "RemoveContainer" containerID="0c5f8996c93e5cb8bd20845d4a98ef7a1a1d016a818e5d666a05dafc1fbc447b" Nov 24 21:46:14 crc kubenswrapper[4915]: E1124 21:46:14.314119 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c5f8996c93e5cb8bd20845d4a98ef7a1a1d016a818e5d666a05dafc1fbc447b\": container with ID starting with 0c5f8996c93e5cb8bd20845d4a98ef7a1a1d016a818e5d666a05dafc1fbc447b not found: ID does not exist" containerID="0c5f8996c93e5cb8bd20845d4a98ef7a1a1d016a818e5d666a05dafc1fbc447b" Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.314152 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c5f8996c93e5cb8bd20845d4a98ef7a1a1d016a818e5d666a05dafc1fbc447b"} err="failed to get container status \"0c5f8996c93e5cb8bd20845d4a98ef7a1a1d016a818e5d666a05dafc1fbc447b\": rpc error: code = NotFound desc = could not find container \"0c5f8996c93e5cb8bd20845d4a98ef7a1a1d016a818e5d666a05dafc1fbc447b\": container with ID starting with 0c5f8996c93e5cb8bd20845d4a98ef7a1a1d016a818e5d666a05dafc1fbc447b not found: ID does not exist" Nov 24 21:46:14 crc kubenswrapper[4915]: I1124 21:46:14.442463 4915 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d5f3f02-da73-476f-944d-3149838eb7e6" path="/var/lib/kubelet/pods/3d5f3f02-da73-476f-944d-3149838eb7e6/volumes" Nov 24 21:46:15 crc kubenswrapper[4915]: I1124 21:46:15.188819 4915 generic.go:334] "Generic (PLEG): container finished" podID="4adb52f8-9d39-407f-8e70-4dfe00552554" containerID="8109c3d229ebbec9cd346e4172a72969909f023da172fd36a294de6c0117602c" exitCode=0 Nov 24 21:46:15 crc kubenswrapper[4915]: I1124 21:46:15.189102 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-krgft" event={"ID":"4adb52f8-9d39-407f-8e70-4dfe00552554","Type":"ContainerDied","Data":"8109c3d229ebbec9cd346e4172a72969909f023da172fd36a294de6c0117602c"} Nov 24 21:46:16 crc kubenswrapper[4915]: I1124 21:46:16.223739 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-krgft" event={"ID":"4adb52f8-9d39-407f-8e70-4dfe00552554","Type":"ContainerStarted","Data":"829c8bf7743582eebcfe5c023612e708dc1dfe73c9c1cf8045e06ef52f0618a7"} Nov 24 21:46:16 crc kubenswrapper[4915]: I1124 21:46:16.228376 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:16 crc kubenswrapper[4915]: I1124 21:46:16.259754 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-krgft" podStartSLOduration=3.259738556 podStartE2EDuration="3.259738556s" podCreationTimestamp="2025-11-24 21:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:46:16.254538316 +0000 UTC m=+1594.570790479" watchObservedRunningTime="2025-11-24 21:46:16.259738556 +0000 UTC m=+1594.575990729" Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.730209 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w9pgw"] Nov 24 21:46:19 
crc kubenswrapper[4915]: E1124 21:46:19.731333 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5f3f02-da73-476f-944d-3149838eb7e6" containerName="dnsmasq-dns" Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.731345 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5f3f02-da73-476f-944d-3149838eb7e6" containerName="dnsmasq-dns" Nov 24 21:46:19 crc kubenswrapper[4915]: E1124 21:46:19.731366 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d5f3f02-da73-476f-944d-3149838eb7e6" containerName="init" Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.731372 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d5f3f02-da73-476f-944d-3149838eb7e6" containerName="init" Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.731595 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d5f3f02-da73-476f-944d-3149838eb7e6" containerName="dnsmasq-dns" Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.733189 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.749834 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w9pgw"] Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.867951 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/757f4ada-a720-4615-a1e6-41043a302e43-catalog-content\") pod \"community-operators-w9pgw\" (UID: \"757f4ada-a720-4615-a1e6-41043a302e43\") " pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.868248 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmxzg\" (UniqueName: \"kubernetes.io/projected/757f4ada-a720-4615-a1e6-41043a302e43-kube-api-access-vmxzg\") pod \"community-operators-w9pgw\" (UID: \"757f4ada-a720-4615-a1e6-41043a302e43\") " pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.868466 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/757f4ada-a720-4615-a1e6-41043a302e43-utilities\") pod \"community-operators-w9pgw\" (UID: \"757f4ada-a720-4615-a1e6-41043a302e43\") " pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.970558 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/757f4ada-a720-4615-a1e6-41043a302e43-utilities\") pod \"community-operators-w9pgw\" (UID: \"757f4ada-a720-4615-a1e6-41043a302e43\") " pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.970650 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/757f4ada-a720-4615-a1e6-41043a302e43-catalog-content\") pod \"community-operators-w9pgw\" (UID: \"757f4ada-a720-4615-a1e6-41043a302e43\") " pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.970725 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmxzg\" (UniqueName: \"kubernetes.io/projected/757f4ada-a720-4615-a1e6-41043a302e43-kube-api-access-vmxzg\") pod \"community-operators-w9pgw\" (UID: \"757f4ada-a720-4615-a1e6-41043a302e43\") " pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.971270 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/757f4ada-a720-4615-a1e6-41043a302e43-utilities\") pod \"community-operators-w9pgw\" (UID: \"757f4ada-a720-4615-a1e6-41043a302e43\") " pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.971286 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/757f4ada-a720-4615-a1e6-41043a302e43-catalog-content\") pod \"community-operators-w9pgw\" (UID: \"757f4ada-a720-4615-a1e6-41043a302e43\") " pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:46:19 crc kubenswrapper[4915]: I1124 21:46:19.997954 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmxzg\" (UniqueName: \"kubernetes.io/projected/757f4ada-a720-4615-a1e6-41043a302e43-kube-api-access-vmxzg\") pod \"community-operators-w9pgw\" (UID: \"757f4ada-a720-4615-a1e6-41043a302e43\") " pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.057026 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.334737 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cf2cb"] Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.337523 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.360742 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf2cb"] Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.482653 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-catalog-content\") pod \"redhat-marketplace-cf2cb\" (UID: \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\") " pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.482714 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-utilities\") pod \"redhat-marketplace-cf2cb\" (UID: \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\") " pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.483036 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rc7s\" (UniqueName: \"kubernetes.io/projected/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-kube-api-access-4rc7s\") pod \"redhat-marketplace-cf2cb\" (UID: \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\") " pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.587604 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-catalog-content\") pod \"redhat-marketplace-cf2cb\" (UID: \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\") " pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.588216 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-utilities\") pod \"redhat-marketplace-cf2cb\" (UID: \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\") " pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.588550 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rc7s\" (UniqueName: \"kubernetes.io/projected/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-kube-api-access-4rc7s\") pod \"redhat-marketplace-cf2cb\" (UID: \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\") " pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.588141 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-catalog-content\") pod \"redhat-marketplace-cf2cb\" (UID: \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\") " pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.589182 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-utilities\") pod \"redhat-marketplace-cf2cb\" (UID: \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\") " pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.611290 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rc7s\" (UniqueName: 
\"kubernetes.io/projected/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-kube-api-access-4rc7s\") pod \"redhat-marketplace-cf2cb\" (UID: \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\") " pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.672552 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:20 crc kubenswrapper[4915]: I1124 21:46:20.672850 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w9pgw"] Nov 24 21:46:20 crc kubenswrapper[4915]: W1124 21:46:20.673973 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod757f4ada_a720_4615_a1e6_41043a302e43.slice/crio-d9858fde12b1433d61e216fe3692daad353d5e25a7920a440c359821e3026ad7 WatchSource:0}: Error finding container d9858fde12b1433d61e216fe3692daad353d5e25a7920a440c359821e3026ad7: Status 404 returned error can't find the container with id d9858fde12b1433d61e216fe3692daad353d5e25a7920a440c359821e3026ad7 Nov 24 21:46:21 crc kubenswrapper[4915]: I1124 21:46:21.197898 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf2cb"] Nov 24 21:46:21 crc kubenswrapper[4915]: I1124 21:46:21.306563 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf2cb" event={"ID":"b0fd4b9f-01e8-4c1b-a564-ee695d24757c","Type":"ContainerStarted","Data":"db6ad5b4d797d045af8e681d4475d8dd22dad42dc096c686705b021060d22ffb"} Nov 24 21:46:21 crc kubenswrapper[4915]: I1124 21:46:21.307916 4915 generic.go:334] "Generic (PLEG): container finished" podID="757f4ada-a720-4615-a1e6-41043a302e43" containerID="e2eed6230170c4914a7edf0077dc3e1a8c5a4d469c9214b9cecb8448583b2825" exitCode=0 Nov 24 21:46:21 crc kubenswrapper[4915]: I1124 21:46:21.307968 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-w9pgw" event={"ID":"757f4ada-a720-4615-a1e6-41043a302e43","Type":"ContainerDied","Data":"e2eed6230170c4914a7edf0077dc3e1a8c5a4d469c9214b9cecb8448583b2825"} Nov 24 21:46:21 crc kubenswrapper[4915]: I1124 21:46:21.308033 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9pgw" event={"ID":"757f4ada-a720-4615-a1e6-41043a302e43","Type":"ContainerStarted","Data":"d9858fde12b1433d61e216fe3692daad353d5e25a7920a440c359821e3026ad7"} Nov 24 21:46:21 crc kubenswrapper[4915]: I1124 21:46:21.310081 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.321149 4915 generic.go:334] "Generic (PLEG): container finished" podID="b0fd4b9f-01e8-4c1b-a564-ee695d24757c" containerID="9ad5667b02e0b8b9a3cc4974a6b4723e9a23cd3ca11dd1df89168846cd70bb5b" exitCode=0 Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.321333 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf2cb" event={"ID":"b0fd4b9f-01e8-4c1b-a564-ee695d24757c","Type":"ContainerDied","Data":"9ad5667b02e0b8b9a3cc4974a6b4723e9a23cd3ca11dd1df89168846cd70bb5b"} Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.328643 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9pgw" event={"ID":"757f4ada-a720-4615-a1e6-41043a302e43","Type":"ContainerStarted","Data":"8c269b0147333f8430bbae33282d19ffc7a99c1a974e0930b4088c657e37e930"} Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.733941 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nrknt"] Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.736900 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.752334 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nrknt"] Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.846507 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swwc4\" (UniqueName: \"kubernetes.io/projected/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-kube-api-access-swwc4\") pod \"certified-operators-nrknt\" (UID: \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\") " pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.846947 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-catalog-content\") pod \"certified-operators-nrknt\" (UID: \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\") " pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.847116 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-utilities\") pod \"certified-operators-nrknt\" (UID: \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\") " pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.952490 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-catalog-content\") pod \"certified-operators-nrknt\" (UID: \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\") " pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.952609 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-utilities\") pod \"certified-operators-nrknt\" (UID: \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\") " pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.952712 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swwc4\" (UniqueName: \"kubernetes.io/projected/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-kube-api-access-swwc4\") pod \"certified-operators-nrknt\" (UID: \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\") " pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.952954 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-catalog-content\") pod \"certified-operators-nrknt\" (UID: \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\") " pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.953293 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-utilities\") pod \"certified-operators-nrknt\" (UID: \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\") " pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:46:22 crc kubenswrapper[4915]: I1124 21:46:22.972480 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swwc4\" (UniqueName: \"kubernetes.io/projected/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-kube-api-access-swwc4\") pod \"certified-operators-nrknt\" (UID: \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\") " pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:46:23 crc kubenswrapper[4915]: I1124 21:46:23.066922 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:46:23 crc kubenswrapper[4915]: I1124 21:46:23.350245 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf2cb" event={"ID":"b0fd4b9f-01e8-4c1b-a564-ee695d24757c","Type":"ContainerStarted","Data":"224896a421349e1f7c4b38bba70fac517f2b56c9753cd0737060a53630702345"} Nov 24 21:46:23 crc kubenswrapper[4915]: I1124 21:46:23.428374 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:46:23 crc kubenswrapper[4915]: E1124 21:46:23.428587 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:46:23 crc kubenswrapper[4915]: I1124 21:46:23.646914 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-krgft" Nov 24 21:46:23 crc kubenswrapper[4915]: I1124 21:46:23.695630 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nrknt"] Nov 24 21:46:23 crc kubenswrapper[4915]: I1124 21:46:23.750847 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-52lgl"] Nov 24 21:46:23 crc kubenswrapper[4915]: I1124 21:46:23.751086 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" podUID="f756a69e-2278-4383-8498-3bf49c149cb9" containerName="dnsmasq-dns" containerID="cri-o://b49f70e0f7cdc41a8164c858da5fd643c848ebc95f5a0e5ad313b65d19fa40e9" gracePeriod=10 Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 
21:46:24.358841 4915 generic.go:334] "Generic (PLEG): container finished" podID="f756a69e-2278-4383-8498-3bf49c149cb9" containerID="b49f70e0f7cdc41a8164c858da5fd643c848ebc95f5a0e5ad313b65d19fa40e9" exitCode=0 Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.358929 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" event={"ID":"f756a69e-2278-4383-8498-3bf49c149cb9","Type":"ContainerDied","Data":"b49f70e0f7cdc41a8164c858da5fd643c848ebc95f5a0e5ad313b65d19fa40e9"} Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.359304 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" event={"ID":"f756a69e-2278-4383-8498-3bf49c149cb9","Type":"ContainerDied","Data":"a680419d1448a9e0b54aafb5a04dbb79b76c9d90bfe1da3b8669e29e7084c677"} Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.359324 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a680419d1448a9e0b54aafb5a04dbb79b76c9d90bfe1da3b8669e29e7084c677" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.361121 4915 generic.go:334] "Generic (PLEG): container finished" podID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" containerID="f6804497ddf646eb9f066d1cbdf154151a0c93405109855380611c4f32478a63" exitCode=0 Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.361208 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrknt" event={"ID":"5f0637b1-c40b-40a6-8d0b-49c37a127cb3","Type":"ContainerDied","Data":"f6804497ddf646eb9f066d1cbdf154151a0c93405109855380611c4f32478a63"} Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.361234 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrknt" event={"ID":"5f0637b1-c40b-40a6-8d0b-49c37a127cb3","Type":"ContainerStarted","Data":"a4ee4ea65a11cdcc79bc8f331f9c459dfd5ff57bec6b065d4ac43641210e2031"} Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 
21:46:24.410431 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.498148 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-dns-swift-storage-0\") pod \"f756a69e-2278-4383-8498-3bf49c149cb9\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.498207 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-ovsdbserver-nb\") pod \"f756a69e-2278-4383-8498-3bf49c149cb9\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.498311 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-config\") pod \"f756a69e-2278-4383-8498-3bf49c149cb9\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.498441 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-ovsdbserver-sb\") pod \"f756a69e-2278-4383-8498-3bf49c149cb9\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.498603 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vsgx\" (UniqueName: \"kubernetes.io/projected/f756a69e-2278-4383-8498-3bf49c149cb9-kube-api-access-4vsgx\") pod \"f756a69e-2278-4383-8498-3bf49c149cb9\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.498642 4915 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-openstack-edpm-ipam\") pod \"f756a69e-2278-4383-8498-3bf49c149cb9\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.498673 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-dns-svc\") pod \"f756a69e-2278-4383-8498-3bf49c149cb9\" (UID: \"f756a69e-2278-4383-8498-3bf49c149cb9\") " Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.528493 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f756a69e-2278-4383-8498-3bf49c149cb9-kube-api-access-4vsgx" (OuterVolumeSpecName: "kube-api-access-4vsgx") pod "f756a69e-2278-4383-8498-3bf49c149cb9" (UID: "f756a69e-2278-4383-8498-3bf49c149cb9"). InnerVolumeSpecName "kube-api-access-4vsgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.575998 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-config" (OuterVolumeSpecName: "config") pod "f756a69e-2278-4383-8498-3bf49c149cb9" (UID: "f756a69e-2278-4383-8498-3bf49c149cb9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.576645 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f756a69e-2278-4383-8498-3bf49c149cb9" (UID: "f756a69e-2278-4383-8498-3bf49c149cb9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.576693 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f756a69e-2278-4383-8498-3bf49c149cb9" (UID: "f756a69e-2278-4383-8498-3bf49c149cb9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.577240 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f756a69e-2278-4383-8498-3bf49c149cb9" (UID: "f756a69e-2278-4383-8498-3bf49c149cb9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.577666 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f756a69e-2278-4383-8498-3bf49c149cb9" (UID: "f756a69e-2278-4383-8498-3bf49c149cb9"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.584407 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f756a69e-2278-4383-8498-3bf49c149cb9" (UID: "f756a69e-2278-4383-8498-3bf49c149cb9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.601948 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vsgx\" (UniqueName: \"kubernetes.io/projected/f756a69e-2278-4383-8498-3bf49c149cb9-kube-api-access-4vsgx\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.601989 4915 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.602001 4915 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.602016 4915 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.602029 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.602041 4915 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-config\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:24 crc kubenswrapper[4915]: I1124 21:46:24.602051 4915 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f756a69e-2278-4383-8498-3bf49c149cb9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:25 crc kubenswrapper[4915]: I1124 21:46:25.373276 
4915 generic.go:334] "Generic (PLEG): container finished" podID="757f4ada-a720-4615-a1e6-41043a302e43" containerID="8c269b0147333f8430bbae33282d19ffc7a99c1a974e0930b4088c657e37e930" exitCode=0 Nov 24 21:46:25 crc kubenswrapper[4915]: I1124 21:46:25.373633 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-52lgl" Nov 24 21:46:25 crc kubenswrapper[4915]: I1124 21:46:25.373315 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9pgw" event={"ID":"757f4ada-a720-4615-a1e6-41043a302e43","Type":"ContainerDied","Data":"8c269b0147333f8430bbae33282d19ffc7a99c1a974e0930b4088c657e37e930"} Nov 24 21:46:25 crc kubenswrapper[4915]: I1124 21:46:25.445365 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-52lgl"] Nov 24 21:46:25 crc kubenswrapper[4915]: I1124 21:46:25.455269 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-52lgl"] Nov 24 21:46:26 crc kubenswrapper[4915]: I1124 21:46:26.388452 4915 generic.go:334] "Generic (PLEG): container finished" podID="b0fd4b9f-01e8-4c1b-a564-ee695d24757c" containerID="224896a421349e1f7c4b38bba70fac517f2b56c9753cd0737060a53630702345" exitCode=0 Nov 24 21:46:26 crc kubenswrapper[4915]: I1124 21:46:26.388537 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf2cb" event={"ID":"b0fd4b9f-01e8-4c1b-a564-ee695d24757c","Type":"ContainerDied","Data":"224896a421349e1f7c4b38bba70fac517f2b56c9753cd0737060a53630702345"} Nov 24 21:46:26 crc kubenswrapper[4915]: I1124 21:46:26.458550 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f756a69e-2278-4383-8498-3bf49c149cb9" path="/var/lib/kubelet/pods/f756a69e-2278-4383-8498-3bf49c149cb9/volumes" Nov 24 21:46:27 crc kubenswrapper[4915]: I1124 21:46:27.401682 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-nrknt" event={"ID":"5f0637b1-c40b-40a6-8d0b-49c37a127cb3","Type":"ContainerStarted","Data":"35ed0487c6e846a62bec0f2b6484d041ac9301354d6ed57404d82882ba17e2f1"} Nov 24 21:46:28 crc kubenswrapper[4915]: I1124 21:46:28.421523 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf2cb" event={"ID":"b0fd4b9f-01e8-4c1b-a564-ee695d24757c","Type":"ContainerStarted","Data":"25af712785026ef9bed0ea936a9c5a646f84fcb0159522442f912ced7e56e68f"} Nov 24 21:46:28 crc kubenswrapper[4915]: I1124 21:46:28.424975 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9pgw" event={"ID":"757f4ada-a720-4615-a1e6-41043a302e43","Type":"ContainerStarted","Data":"041c33822a97f141a9e4f28828651b2bfec2a2d58d4ac882e8e5d39e4061b4d2"} Nov 24 21:46:28 crc kubenswrapper[4915]: I1124 21:46:28.476196 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cf2cb" podStartSLOduration=3.246870294 podStartE2EDuration="8.476170145s" podCreationTimestamp="2025-11-24 21:46:20 +0000 UTC" firstStartedPulling="2025-11-24 21:46:22.323438828 +0000 UTC m=+1600.639691021" lastFinishedPulling="2025-11-24 21:46:27.552738699 +0000 UTC m=+1605.868990872" observedRunningTime="2025-11-24 21:46:28.443428944 +0000 UTC m=+1606.759681167" watchObservedRunningTime="2025-11-24 21:46:28.476170145 +0000 UTC m=+1606.792422328" Nov 24 21:46:28 crc kubenswrapper[4915]: I1124 21:46:28.521041 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w9pgw" podStartSLOduration=3.651622092 podStartE2EDuration="9.521024371s" podCreationTimestamp="2025-11-24 21:46:19 +0000 UTC" firstStartedPulling="2025-11-24 21:46:21.30987108 +0000 UTC m=+1599.626123253" lastFinishedPulling="2025-11-24 21:46:27.179273329 +0000 UTC m=+1605.495525532" observedRunningTime="2025-11-24 
21:46:28.4800872 +0000 UTC m=+1606.796339383" watchObservedRunningTime="2025-11-24 21:46:28.521024371 +0000 UTC m=+1606.837276544" Nov 24 21:46:30 crc kubenswrapper[4915]: I1124 21:46:30.057573 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:46:30 crc kubenswrapper[4915]: I1124 21:46:30.057855 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:46:30 crc kubenswrapper[4915]: I1124 21:46:30.448305 4915 generic.go:334] "Generic (PLEG): container finished" podID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" containerID="35ed0487c6e846a62bec0f2b6484d041ac9301354d6ed57404d82882ba17e2f1" exitCode=0 Nov 24 21:46:30 crc kubenswrapper[4915]: I1124 21:46:30.448441 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrknt" event={"ID":"5f0637b1-c40b-40a6-8d0b-49c37a127cb3","Type":"ContainerDied","Data":"35ed0487c6e846a62bec0f2b6484d041ac9301354d6ed57404d82882ba17e2f1"} Nov 24 21:46:30 crc kubenswrapper[4915]: I1124 21:46:30.673812 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:30 crc kubenswrapper[4915]: I1124 21:46:30.674407 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:31 crc kubenswrapper[4915]: I1124 21:46:31.113863 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-w9pgw" podUID="757f4ada-a720-4615-a1e6-41043a302e43" containerName="registry-server" probeResult="failure" output=< Nov 24 21:46:31 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 21:46:31 crc kubenswrapper[4915]: > Nov 24 21:46:31 crc kubenswrapper[4915]: I1124 21:46:31.461282 4915 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-nrknt" event={"ID":"5f0637b1-c40b-40a6-8d0b-49c37a127cb3","Type":"ContainerStarted","Data":"b0ff068260b60821c29ccb1e7484c261796393e2e67eb9d77e35b9041923a25e"} Nov 24 21:46:31 crc kubenswrapper[4915]: I1124 21:46:31.491921 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nrknt" podStartSLOduration=2.991935577 podStartE2EDuration="9.491906118s" podCreationTimestamp="2025-11-24 21:46:22 +0000 UTC" firstStartedPulling="2025-11-24 21:46:24.363178434 +0000 UTC m=+1602.679430607" lastFinishedPulling="2025-11-24 21:46:30.863148975 +0000 UTC m=+1609.179401148" observedRunningTime="2025-11-24 21:46:31.476581246 +0000 UTC m=+1609.792833419" watchObservedRunningTime="2025-11-24 21:46:31.491906118 +0000 UTC m=+1609.808158291" Nov 24 21:46:31 crc kubenswrapper[4915]: I1124 21:46:31.730437 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-cf2cb" podUID="b0fd4b9f-01e8-4c1b-a564-ee695d24757c" containerName="registry-server" probeResult="failure" output=< Nov 24 21:46:31 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 21:46:31 crc kubenswrapper[4915]: > Nov 24 21:46:33 crc kubenswrapper[4915]: I1124 21:46:33.068449 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:46:33 crc kubenswrapper[4915]: I1124 21:46:33.068571 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:46:34 crc kubenswrapper[4915]: I1124 21:46:34.122115 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nrknt" podUID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" containerName="registry-server" probeResult="failure" output=< Nov 24 21:46:34 crc kubenswrapper[4915]: timeout: failed to connect service 
":50051" within 1s Nov 24 21:46:34 crc kubenswrapper[4915]: > Nov 24 21:46:35 crc kubenswrapper[4915]: I1124 21:46:35.510799 4915 generic.go:334] "Generic (PLEG): container finished" podID="28a8634d-9ce0-460f-a8a9-e5cb05fc63cc" containerID="a1a66138fad7f415b658d09e015a2499be32736664e7b8e54425aa5ccbc7fb06" exitCode=0 Nov 24 21:46:35 crc kubenswrapper[4915]: I1124 21:46:35.510895 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc","Type":"ContainerDied","Data":"a1a66138fad7f415b658d09e015a2499be32736664e7b8e54425aa5ccbc7fb06"} Nov 24 21:46:35 crc kubenswrapper[4915]: I1124 21:46:35.519719 4915 generic.go:334] "Generic (PLEG): container finished" podID="6c951dd6-e7fe-411c-8156-92784c966328" containerID="01e8e928672818208bbd06572b4f66651485d8de6fbc53a3913c906d542d8b53" exitCode=0 Nov 24 21:46:35 crc kubenswrapper[4915]: I1124 21:46:35.519801 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6c951dd6-e7fe-411c-8156-92784c966328","Type":"ContainerDied","Data":"01e8e928672818208bbd06572b4f66651485d8de6fbc53a3913c906d542d8b53"} Nov 24 21:46:36 crc kubenswrapper[4915]: I1124 21:46:36.532293 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"28a8634d-9ce0-460f-a8a9-e5cb05fc63cc","Type":"ContainerStarted","Data":"acabe3980b3e6f1dc9f2525ed6c38daa5a0427ecb4c50b8e00cbeecf2ae948fb"} Nov 24 21:46:36 crc kubenswrapper[4915]: I1124 21:46:36.533087 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 24 21:46:36 crc kubenswrapper[4915]: I1124 21:46:36.536385 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6c951dd6-e7fe-411c-8156-92784c966328","Type":"ContainerStarted","Data":"75c9b34a5433faabca24056121910f07ea825289e28238478393d0a36a45fe6d"} Nov 24 21:46:36 crc 
kubenswrapper[4915]: I1124 21:46:36.537044 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:46:36 crc kubenswrapper[4915]: I1124 21:46:36.570366 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.570337944 podStartE2EDuration="37.570337944s" podCreationTimestamp="2025-11-24 21:45:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:46:36.561949538 +0000 UTC m=+1614.878201741" watchObservedRunningTime="2025-11-24 21:46:36.570337944 +0000 UTC m=+1614.886590117" Nov 24 21:46:36 crc kubenswrapper[4915]: I1124 21:46:36.590532 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.590508386 podStartE2EDuration="36.590508386s" podCreationTimestamp="2025-11-24 21:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 21:46:36.590429374 +0000 UTC m=+1614.906681577" watchObservedRunningTime="2025-11-24 21:46:36.590508386 +0000 UTC m=+1614.906760559" Nov 24 21:46:37 crc kubenswrapper[4915]: I1124 21:46:37.942086 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2"] Nov 24 21:46:37 crc kubenswrapper[4915]: E1124 21:46:37.942917 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f756a69e-2278-4383-8498-3bf49c149cb9" containerName="init" Nov 24 21:46:37 crc kubenswrapper[4915]: I1124 21:46:37.942936 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f756a69e-2278-4383-8498-3bf49c149cb9" containerName="init" Nov 24 21:46:37 crc kubenswrapper[4915]: E1124 21:46:37.942991 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f756a69e-2278-4383-8498-3bf49c149cb9" containerName="dnsmasq-dns" Nov 24 21:46:37 crc kubenswrapper[4915]: I1124 21:46:37.943002 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f756a69e-2278-4383-8498-3bf49c149cb9" containerName="dnsmasq-dns" Nov 24 21:46:37 crc kubenswrapper[4915]: I1124 21:46:37.943267 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f756a69e-2278-4383-8498-3bf49c149cb9" containerName="dnsmasq-dns" Nov 24 21:46:37 crc kubenswrapper[4915]: I1124 21:46:37.944080 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:46:37 crc kubenswrapper[4915]: I1124 21:46:37.948399 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:46:37 crc kubenswrapper[4915]: I1124 21:46:37.948438 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 21:46:37 crc kubenswrapper[4915]: I1124 21:46:37.948599 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:46:37 crc kubenswrapper[4915]: I1124 21:46:37.948719 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.002073 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2"] Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.099987 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.100118 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.100171 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlgqk\" (UniqueName: \"kubernetes.io/projected/17878839-cfa7-48e0-a162-9a11347e9424-kube-api-access-qlgqk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.101124 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.203167 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.203283 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-qlgqk\" (UniqueName: \"kubernetes.io/projected/17878839-cfa7-48e0-a162-9a11347e9424-kube-api-access-qlgqk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.203313 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.203506 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.208541 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.208803 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.208835 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.229146 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlgqk\" (UniqueName: \"kubernetes.io/projected/17878839-cfa7-48e0-a162-9a11347e9424-kube-api-access-qlgqk\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.274052 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:46:38 crc kubenswrapper[4915]: I1124 21:46:38.427222 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:46:38 crc kubenswrapper[4915]: E1124 21:46:38.427923 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:46:39 crc kubenswrapper[4915]: I1124 21:46:39.453042 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2"] Nov 24 21:46:39 crc kubenswrapper[4915]: I1124 21:46:39.577714 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" event={"ID":"17878839-cfa7-48e0-a162-9a11347e9424","Type":"ContainerStarted","Data":"3d0d8c56b1283db54df1c76e2d4500d1250897e2794703dc98f8aa3d56d26def"} Nov 24 21:46:40 crc kubenswrapper[4915]: I1124 21:46:40.785544 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:40 crc kubenswrapper[4915]: I1124 21:46:40.847103 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:41 crc kubenswrapper[4915]: I1124 21:46:41.028516 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf2cb"] Nov 24 21:46:41 crc kubenswrapper[4915]: I1124 21:46:41.121667 4915 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/community-operators-w9pgw" podUID="757f4ada-a720-4615-a1e6-41043a302e43" containerName="registry-server" probeResult="failure" output=< Nov 24 21:46:41 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 21:46:41 crc kubenswrapper[4915]: > Nov 24 21:46:42 crc kubenswrapper[4915]: I1124 21:46:42.619492 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cf2cb" podUID="b0fd4b9f-01e8-4c1b-a564-ee695d24757c" containerName="registry-server" containerID="cri-o://25af712785026ef9bed0ea936a9c5a646f84fcb0159522442f912ced7e56e68f" gracePeriod=2 Nov 24 21:46:44 crc kubenswrapper[4915]: I1124 21:46:44.132991 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nrknt" podUID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" containerName="registry-server" probeResult="failure" output=< Nov 24 21:46:44 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 21:46:44 crc kubenswrapper[4915]: > Nov 24 21:46:44 crc kubenswrapper[4915]: I1124 21:46:44.653630 4915 generic.go:334] "Generic (PLEG): container finished" podID="b0fd4b9f-01e8-4c1b-a564-ee695d24757c" containerID="25af712785026ef9bed0ea936a9c5a646f84fcb0159522442f912ced7e56e68f" exitCode=0 Nov 24 21:46:44 crc kubenswrapper[4915]: I1124 21:46:44.653882 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf2cb" event={"ID":"b0fd4b9f-01e8-4c1b-a564-ee695d24757c","Type":"ContainerDied","Data":"25af712785026ef9bed0ea936a9c5a646f84fcb0159522442f912ced7e56e68f"} Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.437627 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.514790 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-utilities\") pod \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\" (UID: \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\") " Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.515322 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rc7s\" (UniqueName: \"kubernetes.io/projected/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-kube-api-access-4rc7s\") pod \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\" (UID: \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\") " Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.515437 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-catalog-content\") pod \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\" (UID: \"b0fd4b9f-01e8-4c1b-a564-ee695d24757c\") " Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.516207 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-utilities" (OuterVolumeSpecName: "utilities") pod "b0fd4b9f-01e8-4c1b-a564-ee695d24757c" (UID: "b0fd4b9f-01e8-4c1b-a564-ee695d24757c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.517812 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.532988 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-kube-api-access-4rc7s" (OuterVolumeSpecName: "kube-api-access-4rc7s") pod "b0fd4b9f-01e8-4c1b-a564-ee695d24757c" (UID: "b0fd4b9f-01e8-4c1b-a564-ee695d24757c"). InnerVolumeSpecName "kube-api-access-4rc7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.544865 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0fd4b9f-01e8-4c1b-a564-ee695d24757c" (UID: "b0fd4b9f-01e8-4c1b-a564-ee695d24757c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.620745 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.620806 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rc7s\" (UniqueName: \"kubernetes.io/projected/b0fd4b9f-01e8-4c1b-a564-ee695d24757c-kube-api-access-4rc7s\") on node \"crc\" DevicePath \"\"" Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.672391 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cf2cb" event={"ID":"b0fd4b9f-01e8-4c1b-a564-ee695d24757c","Type":"ContainerDied","Data":"db6ad5b4d797d045af8e681d4475d8dd22dad42dc096c686705b021060d22ffb"} Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.672451 4915 scope.go:117] "RemoveContainer" containerID="25af712785026ef9bed0ea936a9c5a646f84fcb0159522442f912ced7e56e68f" Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.672514 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cf2cb" Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.727476 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf2cb"] Nov 24 21:46:45 crc kubenswrapper[4915]: I1124 21:46:45.745676 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cf2cb"] Nov 24 21:46:46 crc kubenswrapper[4915]: I1124 21:46:46.450712 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0fd4b9f-01e8-4c1b-a564-ee695d24757c" path="/var/lib/kubelet/pods/b0fd4b9f-01e8-4c1b-a564-ee695d24757c/volumes" Nov 24 21:46:50 crc kubenswrapper[4915]: I1124 21:46:50.501220 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="28a8634d-9ce0-460f-a8a9-e5cb05fc63cc" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.5:5671: connect: connection refused" Nov 24 21:46:50 crc kubenswrapper[4915]: I1124 21:46:50.955068 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="6c951dd6-e7fe-411c-8156-92784c966328" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.6:5671: connect: connection refused" Nov 24 21:46:51 crc kubenswrapper[4915]: I1124 21:46:51.144050 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-w9pgw" podUID="757f4ada-a720-4615-a1e6-41043a302e43" containerName="registry-server" probeResult="failure" output=< Nov 24 21:46:51 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 21:46:51 crc kubenswrapper[4915]: > Nov 24 21:46:52 crc kubenswrapper[4915]: I1124 21:46:52.227863 4915 scope.go:117] "RemoveContainer" containerID="224896a421349e1f7c4b38bba70fac517f2b56c9753cd0737060a53630702345" Nov 24 21:46:52 crc kubenswrapper[4915]: I1124 21:46:52.301908 4915 scope.go:117] "RemoveContainer" 
containerID="9ad5667b02e0b8b9a3cc4974a6b4723e9a23cd3ca11dd1df89168846cd70bb5b" Nov 24 21:46:52 crc kubenswrapper[4915]: I1124 21:46:52.306808 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:46:52 crc kubenswrapper[4915]: I1124 21:46:52.751359 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" event={"ID":"17878839-cfa7-48e0-a162-9a11347e9424","Type":"ContainerStarted","Data":"77af3bbae7366969747fc6b52b56b885fe764d12c18907640699dd46e5541344"} Nov 24 21:46:52 crc kubenswrapper[4915]: I1124 21:46:52.775528 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" podStartSLOduration=2.916264294 podStartE2EDuration="15.775508255s" podCreationTimestamp="2025-11-24 21:46:37 +0000 UTC" firstStartedPulling="2025-11-24 21:46:39.444399168 +0000 UTC m=+1617.760651341" lastFinishedPulling="2025-11-24 21:46:52.303643129 +0000 UTC m=+1630.619895302" observedRunningTime="2025-11-24 21:46:52.76791056 +0000 UTC m=+1631.084162753" watchObservedRunningTime="2025-11-24 21:46:52.775508255 +0000 UTC m=+1631.091760438" Nov 24 21:46:53 crc kubenswrapper[4915]: I1124 21:46:53.426683 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:46:53 crc kubenswrapper[4915]: E1124 21:46:53.427320 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:46:54 crc kubenswrapper[4915]: I1124 21:46:54.119656 4915 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-nrknt" podUID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" containerName="registry-server" probeResult="failure" output=< Nov 24 21:46:54 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 21:46:54 crc kubenswrapper[4915]: > Nov 24 21:47:00 crc kubenswrapper[4915]: I1124 21:47:00.147523 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:47:00 crc kubenswrapper[4915]: I1124 21:47:00.210393 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:47:00 crc kubenswrapper[4915]: I1124 21:47:00.388412 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w9pgw"] Nov 24 21:47:00 crc kubenswrapper[4915]: I1124 21:47:00.500002 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 24 21:47:00 crc kubenswrapper[4915]: I1124 21:47:00.954211 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 24 21:47:01 crc kubenswrapper[4915]: I1124 21:47:01.858650 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w9pgw" podUID="757f4ada-a720-4615-a1e6-41043a302e43" containerName="registry-server" containerID="cri-o://041c33822a97f141a9e4f28828651b2bfec2a2d58d4ac882e8e5d39e4061b4d2" gracePeriod=2 Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.471728 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.557431 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/757f4ada-a720-4615-a1e6-41043a302e43-utilities\") pod \"757f4ada-a720-4615-a1e6-41043a302e43\" (UID: \"757f4ada-a720-4615-a1e6-41043a302e43\") " Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.557485 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmxzg\" (UniqueName: \"kubernetes.io/projected/757f4ada-a720-4615-a1e6-41043a302e43-kube-api-access-vmxzg\") pod \"757f4ada-a720-4615-a1e6-41043a302e43\" (UID: \"757f4ada-a720-4615-a1e6-41043a302e43\") " Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.557539 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/757f4ada-a720-4615-a1e6-41043a302e43-catalog-content\") pod \"757f4ada-a720-4615-a1e6-41043a302e43\" (UID: \"757f4ada-a720-4615-a1e6-41043a302e43\") " Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.558106 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/757f4ada-a720-4615-a1e6-41043a302e43-utilities" (OuterVolumeSpecName: "utilities") pod "757f4ada-a720-4615-a1e6-41043a302e43" (UID: "757f4ada-a720-4615-a1e6-41043a302e43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.563019 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/757f4ada-a720-4615-a1e6-41043a302e43-kube-api-access-vmxzg" (OuterVolumeSpecName: "kube-api-access-vmxzg") pod "757f4ada-a720-4615-a1e6-41043a302e43" (UID: "757f4ada-a720-4615-a1e6-41043a302e43"). InnerVolumeSpecName "kube-api-access-vmxzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.608844 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/757f4ada-a720-4615-a1e6-41043a302e43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "757f4ada-a720-4615-a1e6-41043a302e43" (UID: "757f4ada-a720-4615-a1e6-41043a302e43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.659988 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/757f4ada-a720-4615-a1e6-41043a302e43-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.660306 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmxzg\" (UniqueName: \"kubernetes.io/projected/757f4ada-a720-4615-a1e6-41043a302e43-kube-api-access-vmxzg\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.660321 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/757f4ada-a720-4615-a1e6-41043a302e43-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.882863 4915 generic.go:334] "Generic (PLEG): container finished" podID="757f4ada-a720-4615-a1e6-41043a302e43" containerID="041c33822a97f141a9e4f28828651b2bfec2a2d58d4ac882e8e5d39e4061b4d2" exitCode=0 Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.882914 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w9pgw" event={"ID":"757f4ada-a720-4615-a1e6-41043a302e43","Type":"ContainerDied","Data":"041c33822a97f141a9e4f28828651b2bfec2a2d58d4ac882e8e5d39e4061b4d2"} Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.882953 4915 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-w9pgw" event={"ID":"757f4ada-a720-4615-a1e6-41043a302e43","Type":"ContainerDied","Data":"d9858fde12b1433d61e216fe3692daad353d5e25a7920a440c359821e3026ad7"} Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.882971 4915 scope.go:117] "RemoveContainer" containerID="041c33822a97f141a9e4f28828651b2bfec2a2d58d4ac882e8e5d39e4061b4d2" Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.882976 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w9pgw" Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.936299 4915 scope.go:117] "RemoveContainer" containerID="8c269b0147333f8430bbae33282d19ffc7a99c1a974e0930b4088c657e37e930" Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.957927 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w9pgw"] Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.969022 4915 scope.go:117] "RemoveContainer" containerID="e2eed6230170c4914a7edf0077dc3e1a8c5a4d469c9214b9cecb8448583b2825" Nov 24 21:47:02 crc kubenswrapper[4915]: I1124 21:47:02.987265 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w9pgw"] Nov 24 21:47:03 crc kubenswrapper[4915]: I1124 21:47:03.036151 4915 scope.go:117] "RemoveContainer" containerID="041c33822a97f141a9e4f28828651b2bfec2a2d58d4ac882e8e5d39e4061b4d2" Nov 24 21:47:03 crc kubenswrapper[4915]: E1124 21:47:03.037596 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"041c33822a97f141a9e4f28828651b2bfec2a2d58d4ac882e8e5d39e4061b4d2\": container with ID starting with 041c33822a97f141a9e4f28828651b2bfec2a2d58d4ac882e8e5d39e4061b4d2 not found: ID does not exist" containerID="041c33822a97f141a9e4f28828651b2bfec2a2d58d4ac882e8e5d39e4061b4d2" Nov 24 21:47:03 crc kubenswrapper[4915]: I1124 
21:47:03.037646 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"041c33822a97f141a9e4f28828651b2bfec2a2d58d4ac882e8e5d39e4061b4d2"} err="failed to get container status \"041c33822a97f141a9e4f28828651b2bfec2a2d58d4ac882e8e5d39e4061b4d2\": rpc error: code = NotFound desc = could not find container \"041c33822a97f141a9e4f28828651b2bfec2a2d58d4ac882e8e5d39e4061b4d2\": container with ID starting with 041c33822a97f141a9e4f28828651b2bfec2a2d58d4ac882e8e5d39e4061b4d2 not found: ID does not exist" Nov 24 21:47:03 crc kubenswrapper[4915]: I1124 21:47:03.037674 4915 scope.go:117] "RemoveContainer" containerID="8c269b0147333f8430bbae33282d19ffc7a99c1a974e0930b4088c657e37e930" Nov 24 21:47:03 crc kubenswrapper[4915]: E1124 21:47:03.038102 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c269b0147333f8430bbae33282d19ffc7a99c1a974e0930b4088c657e37e930\": container with ID starting with 8c269b0147333f8430bbae33282d19ffc7a99c1a974e0930b4088c657e37e930 not found: ID does not exist" containerID="8c269b0147333f8430bbae33282d19ffc7a99c1a974e0930b4088c657e37e930" Nov 24 21:47:03 crc kubenswrapper[4915]: I1124 21:47:03.038150 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c269b0147333f8430bbae33282d19ffc7a99c1a974e0930b4088c657e37e930"} err="failed to get container status \"8c269b0147333f8430bbae33282d19ffc7a99c1a974e0930b4088c657e37e930\": rpc error: code = NotFound desc = could not find container \"8c269b0147333f8430bbae33282d19ffc7a99c1a974e0930b4088c657e37e930\": container with ID starting with 8c269b0147333f8430bbae33282d19ffc7a99c1a974e0930b4088c657e37e930 not found: ID does not exist" Nov 24 21:47:03 crc kubenswrapper[4915]: I1124 21:47:03.038190 4915 scope.go:117] "RemoveContainer" containerID="e2eed6230170c4914a7edf0077dc3e1a8c5a4d469c9214b9cecb8448583b2825" Nov 24 21:47:03 crc 
kubenswrapper[4915]: E1124 21:47:03.038537 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2eed6230170c4914a7edf0077dc3e1a8c5a4d469c9214b9cecb8448583b2825\": container with ID starting with e2eed6230170c4914a7edf0077dc3e1a8c5a4d469c9214b9cecb8448583b2825 not found: ID does not exist" containerID="e2eed6230170c4914a7edf0077dc3e1a8c5a4d469c9214b9cecb8448583b2825" Nov 24 21:47:03 crc kubenswrapper[4915]: I1124 21:47:03.038580 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2eed6230170c4914a7edf0077dc3e1a8c5a4d469c9214b9cecb8448583b2825"} err="failed to get container status \"e2eed6230170c4914a7edf0077dc3e1a8c5a4d469c9214b9cecb8448583b2825\": rpc error: code = NotFound desc = could not find container \"e2eed6230170c4914a7edf0077dc3e1a8c5a4d469c9214b9cecb8448583b2825\": container with ID starting with e2eed6230170c4914a7edf0077dc3e1a8c5a4d469c9214b9cecb8448583b2825 not found: ID does not exist" Nov 24 21:47:03 crc kubenswrapper[4915]: I1124 21:47:03.120313 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:47:03 crc kubenswrapper[4915]: I1124 21:47:03.187852 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:47:03 crc kubenswrapper[4915]: I1124 21:47:03.901360 4915 generic.go:334] "Generic (PLEG): container finished" podID="17878839-cfa7-48e0-a162-9a11347e9424" containerID="77af3bbae7366969747fc6b52b56b885fe764d12c18907640699dd46e5541344" exitCode=0 Nov 24 21:47:03 crc kubenswrapper[4915]: I1124 21:47:03.902960 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" 
event={"ID":"17878839-cfa7-48e0-a162-9a11347e9424","Type":"ContainerDied","Data":"77af3bbae7366969747fc6b52b56b885fe764d12c18907640699dd46e5541344"} Nov 24 21:47:04 crc kubenswrapper[4915]: I1124 21:47:04.440727 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="757f4ada-a720-4615-a1e6-41043a302e43" path="/var/lib/kubelet/pods/757f4ada-a720-4615-a1e6-41043a302e43/volumes" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.389690 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nrknt"] Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.390405 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nrknt" podUID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" containerName="registry-server" containerID="cri-o://b0ff068260b60821c29ccb1e7484c261796393e2e67eb9d77e35b9041923a25e" gracePeriod=2 Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.626833 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.758261 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlgqk\" (UniqueName: \"kubernetes.io/projected/17878839-cfa7-48e0-a162-9a11347e9424-kube-api-access-qlgqk\") pod \"17878839-cfa7-48e0-a162-9a11347e9424\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.758546 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-ssh-key\") pod \"17878839-cfa7-48e0-a162-9a11347e9424\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.758588 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-inventory\") pod \"17878839-cfa7-48e0-a162-9a11347e9424\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.758628 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-repo-setup-combined-ca-bundle\") pod \"17878839-cfa7-48e0-a162-9a11347e9424\" (UID: \"17878839-cfa7-48e0-a162-9a11347e9424\") " Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.766239 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17878839-cfa7-48e0-a162-9a11347e9424-kube-api-access-qlgqk" (OuterVolumeSpecName: "kube-api-access-qlgqk") pod "17878839-cfa7-48e0-a162-9a11347e9424" (UID: "17878839-cfa7-48e0-a162-9a11347e9424"). InnerVolumeSpecName "kube-api-access-qlgqk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.766385 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "17878839-cfa7-48e0-a162-9a11347e9424" (UID: "17878839-cfa7-48e0-a162-9a11347e9424"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.801023 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-inventory" (OuterVolumeSpecName: "inventory") pod "17878839-cfa7-48e0-a162-9a11347e9424" (UID: "17878839-cfa7-48e0-a162-9a11347e9424"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.802910 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "17878839-cfa7-48e0-a162-9a11347e9424" (UID: "17878839-cfa7-48e0-a162-9a11347e9424"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.861587 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.861617 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.861628 4915 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17878839-cfa7-48e0-a162-9a11347e9424-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.861638 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlgqk\" (UniqueName: \"kubernetes.io/projected/17878839-cfa7-48e0-a162-9a11347e9424-kube-api-access-qlgqk\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.887645 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.942405 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" event={"ID":"17878839-cfa7-48e0-a162-9a11347e9424","Type":"ContainerDied","Data":"3d0d8c56b1283db54df1c76e2d4500d1250897e2794703dc98f8aa3d56d26def"} Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.942452 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d0d8c56b1283db54df1c76e2d4500d1250897e2794703dc98f8aa3d56d26def" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.942450 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.948180 4915 generic.go:334] "Generic (PLEG): container finished" podID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" containerID="b0ff068260b60821c29ccb1e7484c261796393e2e67eb9d77e35b9041923a25e" exitCode=0 Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.948295 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrknt" event={"ID":"5f0637b1-c40b-40a6-8d0b-49c37a127cb3","Type":"ContainerDied","Data":"b0ff068260b60821c29ccb1e7484c261796393e2e67eb9d77e35b9041923a25e"} Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.948361 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nrknt" event={"ID":"5f0637b1-c40b-40a6-8d0b-49c37a127cb3","Type":"ContainerDied","Data":"a4ee4ea65a11cdcc79bc8f331f9c459dfd5ff57bec6b065d4ac43641210e2031"} Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.948386 4915 scope.go:117] "RemoveContainer" containerID="b0ff068260b60821c29ccb1e7484c261796393e2e67eb9d77e35b9041923a25e" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.948587 4915 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nrknt" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.962691 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swwc4\" (UniqueName: \"kubernetes.io/projected/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-kube-api-access-swwc4\") pod \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\" (UID: \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\") " Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.962814 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-catalog-content\") pod \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\" (UID: \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\") " Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.962969 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-utilities\") pod \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\" (UID: \"5f0637b1-c40b-40a6-8d0b-49c37a127cb3\") " Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.963757 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-utilities" (OuterVolumeSpecName: "utilities") pod "5f0637b1-c40b-40a6-8d0b-49c37a127cb3" (UID: "5f0637b1-c40b-40a6-8d0b-49c37a127cb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:47:05 crc kubenswrapper[4915]: I1124 21:47:05.985529 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-kube-api-access-swwc4" (OuterVolumeSpecName: "kube-api-access-swwc4") pod "5f0637b1-c40b-40a6-8d0b-49c37a127cb3" (UID: "5f0637b1-c40b-40a6-8d0b-49c37a127cb3"). 
InnerVolumeSpecName "kube-api-access-swwc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.006640 4915 scope.go:117] "RemoveContainer" containerID="35ed0487c6e846a62bec0f2b6484d041ac9301354d6ed57404d82882ba17e2f1" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.035601 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d"] Nov 24 21:47:06 crc kubenswrapper[4915]: E1124 21:47:06.036220 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fd4b9f-01e8-4c1b-a564-ee695d24757c" containerName="extract-utilities" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.036233 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fd4b9f-01e8-4c1b-a564-ee695d24757c" containerName="extract-utilities" Nov 24 21:47:06 crc kubenswrapper[4915]: E1124 21:47:06.036249 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" containerName="extract-content" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.036273 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" containerName="extract-content" Nov 24 21:47:06 crc kubenswrapper[4915]: E1124 21:47:06.036292 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="757f4ada-a720-4615-a1e6-41043a302e43" containerName="registry-server" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.036300 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="757f4ada-a720-4615-a1e6-41043a302e43" containerName="registry-server" Nov 24 21:47:06 crc kubenswrapper[4915]: E1124 21:47:06.036324 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fd4b9f-01e8-4c1b-a564-ee695d24757c" containerName="extract-content" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.036354 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b0fd4b9f-01e8-4c1b-a564-ee695d24757c" containerName="extract-content" Nov 24 21:47:06 crc kubenswrapper[4915]: E1124 21:47:06.036373 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0fd4b9f-01e8-4c1b-a564-ee695d24757c" containerName="registry-server" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.036379 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0fd4b9f-01e8-4c1b-a564-ee695d24757c" containerName="registry-server" Nov 24 21:47:06 crc kubenswrapper[4915]: E1124 21:47:06.036391 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="757f4ada-a720-4615-a1e6-41043a302e43" containerName="extract-content" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.036398 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="757f4ada-a720-4615-a1e6-41043a302e43" containerName="extract-content" Nov 24 21:47:06 crc kubenswrapper[4915]: E1124 21:47:06.036443 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" containerName="registry-server" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.036452 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" containerName="registry-server" Nov 24 21:47:06 crc kubenswrapper[4915]: E1124 21:47:06.036465 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="757f4ada-a720-4615-a1e6-41043a302e43" containerName="extract-utilities" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.036471 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="757f4ada-a720-4615-a1e6-41043a302e43" containerName="extract-utilities" Nov 24 21:47:06 crc kubenswrapper[4915]: E1124 21:47:06.036488 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17878839-cfa7-48e0-a162-9a11347e9424" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.036513 4915 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="17878839-cfa7-48e0-a162-9a11347e9424" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 21:47:06 crc kubenswrapper[4915]: E1124 21:47:06.036527 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" containerName="extract-utilities" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.036532 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" containerName="extract-utilities" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.036826 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" containerName="registry-server" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.036875 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0fd4b9f-01e8-4c1b-a564-ee695d24757c" containerName="registry-server" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.036884 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="757f4ada-a720-4615-a1e6-41043a302e43" containerName="registry-server" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.036892 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="17878839-cfa7-48e0-a162-9a11347e9424" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.037961 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.053095 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.053350 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.054679 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.054680 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.057098 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f0637b1-c40b-40a6-8d0b-49c37a127cb3" (UID: "5f0637b1-c40b-40a6-8d0b-49c37a127cb3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.066860 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.066888 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.066898 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swwc4\" (UniqueName: \"kubernetes.io/projected/5f0637b1-c40b-40a6-8d0b-49c37a127cb3-kube-api-access-swwc4\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.068893 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d"] Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.070705 4915 scope.go:117] "RemoveContainer" containerID="f6804497ddf646eb9f066d1cbdf154151a0c93405109855380611c4f32478a63" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.098936 4915 scope.go:117] "RemoveContainer" containerID="b0ff068260b60821c29ccb1e7484c261796393e2e67eb9d77e35b9041923a25e" Nov 24 21:47:06 crc kubenswrapper[4915]: E1124 21:47:06.099466 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0ff068260b60821c29ccb1e7484c261796393e2e67eb9d77e35b9041923a25e\": container with ID starting with b0ff068260b60821c29ccb1e7484c261796393e2e67eb9d77e35b9041923a25e not found: ID does not exist" containerID="b0ff068260b60821c29ccb1e7484c261796393e2e67eb9d77e35b9041923a25e" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.099510 4915 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"b0ff068260b60821c29ccb1e7484c261796393e2e67eb9d77e35b9041923a25e"} err="failed to get container status \"b0ff068260b60821c29ccb1e7484c261796393e2e67eb9d77e35b9041923a25e\": rpc error: code = NotFound desc = could not find container \"b0ff068260b60821c29ccb1e7484c261796393e2e67eb9d77e35b9041923a25e\": container with ID starting with b0ff068260b60821c29ccb1e7484c261796393e2e67eb9d77e35b9041923a25e not found: ID does not exist" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.099533 4915 scope.go:117] "RemoveContainer" containerID="35ed0487c6e846a62bec0f2b6484d041ac9301354d6ed57404d82882ba17e2f1" Nov 24 21:47:06 crc kubenswrapper[4915]: E1124 21:47:06.099921 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35ed0487c6e846a62bec0f2b6484d041ac9301354d6ed57404d82882ba17e2f1\": container with ID starting with 35ed0487c6e846a62bec0f2b6484d041ac9301354d6ed57404d82882ba17e2f1 not found: ID does not exist" containerID="35ed0487c6e846a62bec0f2b6484d041ac9301354d6ed57404d82882ba17e2f1" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.099952 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ed0487c6e846a62bec0f2b6484d041ac9301354d6ed57404d82882ba17e2f1"} err="failed to get container status \"35ed0487c6e846a62bec0f2b6484d041ac9301354d6ed57404d82882ba17e2f1\": rpc error: code = NotFound desc = could not find container \"35ed0487c6e846a62bec0f2b6484d041ac9301354d6ed57404d82882ba17e2f1\": container with ID starting with 35ed0487c6e846a62bec0f2b6484d041ac9301354d6ed57404d82882ba17e2f1 not found: ID does not exist" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.099969 4915 scope.go:117] "RemoveContainer" containerID="f6804497ddf646eb9f066d1cbdf154151a0c93405109855380611c4f32478a63" Nov 24 21:47:06 crc kubenswrapper[4915]: E1124 21:47:06.100416 4915 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"f6804497ddf646eb9f066d1cbdf154151a0c93405109855380611c4f32478a63\": container with ID starting with f6804497ddf646eb9f066d1cbdf154151a0c93405109855380611c4f32478a63 not found: ID does not exist" containerID="f6804497ddf646eb9f066d1cbdf154151a0c93405109855380611c4f32478a63" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.100465 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6804497ddf646eb9f066d1cbdf154151a0c93405109855380611c4f32478a63"} err="failed to get container status \"f6804497ddf646eb9f066d1cbdf154151a0c93405109855380611c4f32478a63\": rpc error: code = NotFound desc = could not find container \"f6804497ddf646eb9f066d1cbdf154151a0c93405109855380611c4f32478a63\": container with ID starting with f6804497ddf646eb9f066d1cbdf154151a0c93405109855380611c4f32478a63 not found: ID does not exist" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.168651 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2ql5d\" (UID: \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.168750 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw2wd\" (UniqueName: \"kubernetes.io/projected/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-kube-api-access-fw2wd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2ql5d\" (UID: \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.168856 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2ql5d\" (UID: \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.270770 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2ql5d\" (UID: \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.270913 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw2wd\" (UniqueName: \"kubernetes.io/projected/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-kube-api-access-fw2wd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2ql5d\" (UID: \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.271317 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2ql5d\" (UID: \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.276059 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2ql5d\" (UID: \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" Nov 24 21:47:06 crc kubenswrapper[4915]: 
I1124 21:47:06.280650 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2ql5d\" (UID: \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.294887 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nrknt"] Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.295204 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw2wd\" (UniqueName: \"kubernetes.io/projected/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-kube-api-access-fw2wd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2ql5d\" (UID: \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.305414 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nrknt"] Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.374984 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.427553 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:47:06 crc kubenswrapper[4915]: E1124 21:47:06.427990 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:47:06 crc kubenswrapper[4915]: I1124 21:47:06.447895 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f0637b1-c40b-40a6-8d0b-49c37a127cb3" path="/var/lib/kubelet/pods/5f0637b1-c40b-40a6-8d0b-49c37a127cb3/volumes" Nov 24 21:47:07 crc kubenswrapper[4915]: W1124 21:47:07.007406 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bcdaa71_e70f_4d08_969f_bf0f3aa88db8.slice/crio-e6e754725f9ce1fa6557bb21dfef6025e7fd33e015689a284d00ff3da34eab58 WatchSource:0}: Error finding container e6e754725f9ce1fa6557bb21dfef6025e7fd33e015689a284d00ff3da34eab58: Status 404 returned error can't find the container with id e6e754725f9ce1fa6557bb21dfef6025e7fd33e015689a284d00ff3da34eab58 Nov 24 21:47:07 crc kubenswrapper[4915]: I1124 21:47:07.007650 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d"] Nov 24 21:47:08 crc kubenswrapper[4915]: I1124 21:47:08.011227 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" 
event={"ID":"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8","Type":"ContainerStarted","Data":"6f264dff3a53bcf5677aa4db3be1beb2b84877ea25fdd9e13c2d142cb3635983"} Nov 24 21:47:08 crc kubenswrapper[4915]: I1124 21:47:08.011607 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" event={"ID":"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8","Type":"ContainerStarted","Data":"e6e754725f9ce1fa6557bb21dfef6025e7fd33e015689a284d00ff3da34eab58"} Nov 24 21:47:08 crc kubenswrapper[4915]: I1124 21:47:08.034547 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" podStartSLOduration=2.550162809 podStartE2EDuration="3.03452805s" podCreationTimestamp="2025-11-24 21:47:05 +0000 UTC" firstStartedPulling="2025-11-24 21:47:07.010366137 +0000 UTC m=+1645.326618310" lastFinishedPulling="2025-11-24 21:47:07.494731378 +0000 UTC m=+1645.810983551" observedRunningTime="2025-11-24 21:47:08.032390772 +0000 UTC m=+1646.348642985" watchObservedRunningTime="2025-11-24 21:47:08.03452805 +0000 UTC m=+1646.350780233" Nov 24 21:47:11 crc kubenswrapper[4915]: I1124 21:47:11.059485 4915 generic.go:334] "Generic (PLEG): container finished" podID="2bcdaa71-e70f-4d08-969f-bf0f3aa88db8" containerID="6f264dff3a53bcf5677aa4db3be1beb2b84877ea25fdd9e13c2d142cb3635983" exitCode=0 Nov 24 21:47:11 crc kubenswrapper[4915]: I1124 21:47:11.059627 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" event={"ID":"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8","Type":"ContainerDied","Data":"6f264dff3a53bcf5677aa4db3be1beb2b84877ea25fdd9e13c2d142cb3635983"} Nov 24 21:47:12 crc kubenswrapper[4915]: I1124 21:47:12.622579 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" Nov 24 21:47:12 crc kubenswrapper[4915]: I1124 21:47:12.749704 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-ssh-key\") pod \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\" (UID: \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\") " Nov 24 21:47:12 crc kubenswrapper[4915]: I1124 21:47:12.750666 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-inventory\") pod \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\" (UID: \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\") " Nov 24 21:47:12 crc kubenswrapper[4915]: I1124 21:47:12.751009 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw2wd\" (UniqueName: \"kubernetes.io/projected/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-kube-api-access-fw2wd\") pod \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\" (UID: \"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8\") " Nov 24 21:47:12 crc kubenswrapper[4915]: I1124 21:47:12.755626 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-kube-api-access-fw2wd" (OuterVolumeSpecName: "kube-api-access-fw2wd") pod "2bcdaa71-e70f-4d08-969f-bf0f3aa88db8" (UID: "2bcdaa71-e70f-4d08-969f-bf0f3aa88db8"). InnerVolumeSpecName "kube-api-access-fw2wd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:47:12 crc kubenswrapper[4915]: I1124 21:47:12.783894 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-inventory" (OuterVolumeSpecName: "inventory") pod "2bcdaa71-e70f-4d08-969f-bf0f3aa88db8" (UID: "2bcdaa71-e70f-4d08-969f-bf0f3aa88db8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:47:12 crc kubenswrapper[4915]: I1124 21:47:12.812295 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2bcdaa71-e70f-4d08-969f-bf0f3aa88db8" (UID: "2bcdaa71-e70f-4d08-969f-bf0f3aa88db8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:47:12 crc kubenswrapper[4915]: I1124 21:47:12.853970 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:12 crc kubenswrapper[4915]: I1124 21:47:12.854008 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw2wd\" (UniqueName: \"kubernetes.io/projected/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-kube-api-access-fw2wd\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:12 crc kubenswrapper[4915]: I1124 21:47:12.854024 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2bcdaa71-e70f-4d08-969f-bf0f3aa88db8-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.098955 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" event={"ID":"2bcdaa71-e70f-4d08-969f-bf0f3aa88db8","Type":"ContainerDied","Data":"e6e754725f9ce1fa6557bb21dfef6025e7fd33e015689a284d00ff3da34eab58"} Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.099033 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2ql5d" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.099044 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6e754725f9ce1fa6557bb21dfef6025e7fd33e015689a284d00ff3da34eab58" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.102359 4915 scope.go:117] "RemoveContainer" containerID="a91d57e8e1a5a27871050dae5a121ab8d7418eb4ef962432bbcf37df14437731" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.141263 4915 scope.go:117] "RemoveContainer" containerID="3bc1729e828a003ae84d24761ba79a9fbe3919376bdebe42cefa23d9b29b9989" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.185444 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6"] Nov 24 21:47:13 crc kubenswrapper[4915]: E1124 21:47:13.186147 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bcdaa71-e70f-4d08-969f-bf0f3aa88db8" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.186175 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bcdaa71-e70f-4d08-969f-bf0f3aa88db8" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.186502 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bcdaa71-e70f-4d08-969f-bf0f3aa88db8" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.188067 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.190870 4915 scope.go:117] "RemoveContainer" containerID="44c2eb5bdf0da5e246548f1d2ef03a4d19a509aec35c968db9f50e0f1a990917" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.191346 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.191694 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.191770 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.191867 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.198152 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6"] Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.267887 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdjtl\" (UniqueName: \"kubernetes.io/projected/48dcd241-f575-4735-8ad2-0449ae02ddaf-kube-api-access-cdjtl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6\" (UID: \"48dcd241-f575-4735-8ad2-0449ae02ddaf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.270120 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6\" (UID: 
\"48dcd241-f575-4735-8ad2-0449ae02ddaf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.270238 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6\" (UID: \"48dcd241-f575-4735-8ad2-0449ae02ddaf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.270308 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6\" (UID: \"48dcd241-f575-4735-8ad2-0449ae02ddaf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.373073 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6\" (UID: \"48dcd241-f575-4735-8ad2-0449ae02ddaf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.373144 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6\" (UID: \"48dcd241-f575-4735-8ad2-0449ae02ddaf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.373189 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6\" (UID: \"48dcd241-f575-4735-8ad2-0449ae02ddaf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.373283 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdjtl\" (UniqueName: \"kubernetes.io/projected/48dcd241-f575-4735-8ad2-0449ae02ddaf-kube-api-access-cdjtl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6\" (UID: \"48dcd241-f575-4735-8ad2-0449ae02ddaf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.377484 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6\" (UID: \"48dcd241-f575-4735-8ad2-0449ae02ddaf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.377529 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6\" (UID: \"48dcd241-f575-4735-8ad2-0449ae02ddaf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.377797 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6\" (UID: 
\"48dcd241-f575-4735-8ad2-0449ae02ddaf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.390614 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdjtl\" (UniqueName: \"kubernetes.io/projected/48dcd241-f575-4735-8ad2-0449ae02ddaf-kube-api-access-cdjtl\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6\" (UID: \"48dcd241-f575-4735-8ad2-0449ae02ddaf\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" Nov 24 21:47:13 crc kubenswrapper[4915]: I1124 21:47:13.521980 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" Nov 24 21:47:14 crc kubenswrapper[4915]: I1124 21:47:14.169452 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6"] Nov 24 21:47:15 crc kubenswrapper[4915]: I1124 21:47:15.127653 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" event={"ID":"48dcd241-f575-4735-8ad2-0449ae02ddaf","Type":"ContainerStarted","Data":"17f55b51f2bb2dc3fa6f2e276202f8787f4633424071400dd4c6112a9148d8f0"} Nov 24 21:47:15 crc kubenswrapper[4915]: I1124 21:47:15.128249 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" event={"ID":"48dcd241-f575-4735-8ad2-0449ae02ddaf","Type":"ContainerStarted","Data":"f0ea1d282b8f16343da449a19c7558704d4bd0e96995e9d671c32bea771ab725"} Nov 24 21:47:15 crc kubenswrapper[4915]: I1124 21:47:15.156927 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" podStartSLOduration=1.740304675 podStartE2EDuration="2.156898574s" podCreationTimestamp="2025-11-24 21:47:13 +0000 UTC" firstStartedPulling="2025-11-24 21:47:14.174101953 
+0000 UTC m=+1652.490354126" lastFinishedPulling="2025-11-24 21:47:14.590695812 +0000 UTC m=+1652.906948025" observedRunningTime="2025-11-24 21:47:15.149528546 +0000 UTC m=+1653.465780769" watchObservedRunningTime="2025-11-24 21:47:15.156898574 +0000 UTC m=+1653.473150767" Nov 24 21:47:20 crc kubenswrapper[4915]: I1124 21:47:20.427756 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:47:20 crc kubenswrapper[4915]: E1124 21:47:20.429304 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:47:31 crc kubenswrapper[4915]: I1124 21:47:31.426920 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:47:31 crc kubenswrapper[4915]: E1124 21:47:31.427722 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:47:45 crc kubenswrapper[4915]: I1124 21:47:45.427494 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:47:45 crc kubenswrapper[4915]: E1124 21:47:45.428236 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:47:59 crc kubenswrapper[4915]: I1124 21:47:59.428857 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:47:59 crc kubenswrapper[4915]: E1124 21:47:59.429621 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:48:13 crc kubenswrapper[4915]: I1124 21:48:13.398214 4915 scope.go:117] "RemoveContainer" containerID="f3e8acb1a79d00a6496f0fe12dd9bac40012355de41215adb38f498553c99853" Nov 24 21:48:13 crc kubenswrapper[4915]: I1124 21:48:13.427574 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:48:13 crc kubenswrapper[4915]: E1124 21:48:13.428322 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:48:13 crc kubenswrapper[4915]: I1124 21:48:13.438963 4915 scope.go:117] "RemoveContainer" 
containerID="e880bef4e0f1c9ea8c2de97d2a0290d4e0275c418a1d605a705a3c400e423ec7" Nov 24 21:48:13 crc kubenswrapper[4915]: I1124 21:48:13.481066 4915 scope.go:117] "RemoveContainer" containerID="ce1afc60f87ebad07cb6720d86a00f388082ee5f6d05848f61eecb66be3a1183" Nov 24 21:48:13 crc kubenswrapper[4915]: I1124 21:48:13.556507 4915 scope.go:117] "RemoveContainer" containerID="db52c0172ea04d0a948de02991c171a2413e5b7665489c613c5f8155042a9753" Nov 24 21:48:13 crc kubenswrapper[4915]: I1124 21:48:13.590111 4915 scope.go:117] "RemoveContainer" containerID="af9ee1aa18ff1870dbd543975255711a2a6e390b816116ff16c1e9f4300af1d7" Nov 24 21:48:13 crc kubenswrapper[4915]: I1124 21:48:13.616277 4915 scope.go:117] "RemoveContainer" containerID="bfd0790b766d393930046cfb400c998ce4a2cfac5f9d11f1e812e3ecd4dc862b" Nov 24 21:48:26 crc kubenswrapper[4915]: I1124 21:48:26.426755 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:48:26 crc kubenswrapper[4915]: E1124 21:48:26.427557 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:48:40 crc kubenswrapper[4915]: I1124 21:48:40.427214 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:48:40 crc kubenswrapper[4915]: E1124 21:48:40.428081 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:48:54 crc kubenswrapper[4915]: I1124 21:48:54.427651 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:48:54 crc kubenswrapper[4915]: E1124 21:48:54.429682 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:49:08 crc kubenswrapper[4915]: I1124 21:49:08.427260 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:49:08 crc kubenswrapper[4915]: E1124 21:49:08.428288 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:49:13 crc kubenswrapper[4915]: I1124 21:49:13.804293 4915 scope.go:117] "RemoveContainer" containerID="9d8fcc49d4845bde8e7a10366404f976096c59219377177fe6698b3e29d863d8" Nov 24 21:49:13 crc kubenswrapper[4915]: I1124 21:49:13.830344 4915 scope.go:117] "RemoveContainer" containerID="758e36b26cc0fb5693068dcda64a28fd6b96b4d2a353c93a74e9837e318e396b" Nov 24 21:49:20 crc kubenswrapper[4915]: I1124 21:49:20.426898 4915 
scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489"
Nov 24 21:49:20 crc kubenswrapper[4915]: E1124 21:49:20.428019 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 21:49:31 crc kubenswrapper[4915]: I1124 21:49:31.427944 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489"
Nov 24 21:49:31 crc kubenswrapper[4915]: E1124 21:49:31.428900 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 21:49:46 crc kubenswrapper[4915]: I1124 21:49:46.427240 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489"
Nov 24 21:49:46 crc kubenswrapper[4915]: E1124 21:49:46.428289 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 21:49:59 crc kubenswrapper[4915]: I1124 21:49:59.427131 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489"
Nov 24 21:49:59 crc kubenswrapper[4915]: E1124 21:49:59.427975 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 21:50:13 crc kubenswrapper[4915]: I1124 21:50:13.428488 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489"
Nov 24 21:50:13 crc kubenswrapper[4915]: E1124 21:50:13.429882 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 21:50:13 crc kubenswrapper[4915]: I1124 21:50:13.886533 4915 scope.go:117] "RemoveContainer" containerID="a341bdbd36c838d6282800c59151bf8da0e4781776e7176542adb0c7493ba339"
Nov 24 21:50:13 crc kubenswrapper[4915]: I1124 21:50:13.912010 4915 scope.go:117] "RemoveContainer" containerID="cd01ce869c7a1ae18efa30bd50d95827ef58a4ae136ea90c6193c577308cd5c2"
Nov 24 21:50:13 crc kubenswrapper[4915]: I1124 21:50:13.934104 4915 scope.go:117] "RemoveContainer" containerID="7a69382b6cc2be0ad42fc337f95e7db8d0f1b3b9728ed7020c17db4f1c39679f"
Nov 24 21:50:13 crc kubenswrapper[4915]: I1124 21:50:13.984023 4915 scope.go:117] "RemoveContainer" containerID="9cacddc55197936037aabc9afb9efe5ecd8ae3e0b51843d17dadd249d3de5880"
Nov 24 21:50:14 crc kubenswrapper[4915]: I1124 21:50:14.010915 4915 scope.go:117] "RemoveContainer" containerID="c5ecd6165b41bac8eb231025c06b9eb6ae10f3057829cf5e807767c3845daa71"
Nov 24 21:50:14 crc kubenswrapper[4915]: I1124 21:50:14.039019 4915 scope.go:117] "RemoveContainer" containerID="ede936be91c81cbeec8d58555049b90d26a0205cf95c14700174b3487a2ea2ba"
Nov 24 21:50:14 crc kubenswrapper[4915]: I1124 21:50:14.070413 4915 scope.go:117] "RemoveContainer" containerID="c4155d6bf5ee23e0d09e319d78c27d6fe369080bbee6687058dc870277a3bd1c"
Nov 24 21:50:19 crc kubenswrapper[4915]: I1124 21:50:19.484817 4915 generic.go:334] "Generic (PLEG): container finished" podID="48dcd241-f575-4735-8ad2-0449ae02ddaf" containerID="17f55b51f2bb2dc3fa6f2e276202f8787f4633424071400dd4c6112a9148d8f0" exitCode=0
Nov 24 21:50:19 crc kubenswrapper[4915]: I1124 21:50:19.484908 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" event={"ID":"48dcd241-f575-4735-8ad2-0449ae02ddaf","Type":"ContainerDied","Data":"17f55b51f2bb2dc3fa6f2e276202f8787f4633424071400dd4c6112a9148d8f0"}
Nov 24 21:50:20 crc kubenswrapper[4915]: I1124 21:50:20.973581 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.026084 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-inventory\") pod \"48dcd241-f575-4735-8ad2-0449ae02ddaf\" (UID: \"48dcd241-f575-4735-8ad2-0449ae02ddaf\") "
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.027048 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-bootstrap-combined-ca-bundle\") pod \"48dcd241-f575-4735-8ad2-0449ae02ddaf\" (UID: \"48dcd241-f575-4735-8ad2-0449ae02ddaf\") "
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.027142 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-ssh-key\") pod \"48dcd241-f575-4735-8ad2-0449ae02ddaf\" (UID: \"48dcd241-f575-4735-8ad2-0449ae02ddaf\") "
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.027209 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdjtl\" (UniqueName: \"kubernetes.io/projected/48dcd241-f575-4735-8ad2-0449ae02ddaf-kube-api-access-cdjtl\") pod \"48dcd241-f575-4735-8ad2-0449ae02ddaf\" (UID: \"48dcd241-f575-4735-8ad2-0449ae02ddaf\") "
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.033106 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "48dcd241-f575-4735-8ad2-0449ae02ddaf" (UID: "48dcd241-f575-4735-8ad2-0449ae02ddaf"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.038937 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48dcd241-f575-4735-8ad2-0449ae02ddaf-kube-api-access-cdjtl" (OuterVolumeSpecName: "kube-api-access-cdjtl") pod "48dcd241-f575-4735-8ad2-0449ae02ddaf" (UID: "48dcd241-f575-4735-8ad2-0449ae02ddaf"). InnerVolumeSpecName "kube-api-access-cdjtl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.057208 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-inventory" (OuterVolumeSpecName: "inventory") pod "48dcd241-f575-4735-8ad2-0449ae02ddaf" (UID: "48dcd241-f575-4735-8ad2-0449ae02ddaf"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.064912 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "48dcd241-f575-4735-8ad2-0449ae02ddaf" (UID: "48dcd241-f575-4735-8ad2-0449ae02ddaf"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.131002 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.131030 4915 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.131042 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48dcd241-f575-4735-8ad2-0449ae02ddaf-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.131052 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdjtl\" (UniqueName: \"kubernetes.io/projected/48dcd241-f575-4735-8ad2-0449ae02ddaf-kube-api-access-cdjtl\") on node \"crc\" DevicePath \"\""
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.516576 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6" event={"ID":"48dcd241-f575-4735-8ad2-0449ae02ddaf","Type":"ContainerDied","Data":"f0ea1d282b8f16343da449a19c7558704d4bd0e96995e9d671c32bea771ab725"}
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.516617 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0ea1d282b8f16343da449a19c7558704d4bd0e96995e9d671c32bea771ab725"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.516668 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.622900 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"]
Nov 24 21:50:21 crc kubenswrapper[4915]: E1124 21:50:21.623651 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48dcd241-f575-4735-8ad2-0449ae02ddaf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.623673 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="48dcd241-f575-4735-8ad2-0449ae02ddaf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.623961 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="48dcd241-f575-4735-8ad2-0449ae02ddaf" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.625108 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.627109 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.628429 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.628542 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.635391 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.648594 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"]
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.749640 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f71427ed-e785-491d-b980-63359324b3ac-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7\" (UID: \"f71427ed-e785-491d-b980-63359324b3ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.750078 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f71427ed-e785-491d-b980-63359324b3ac-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7\" (UID: \"f71427ed-e785-491d-b980-63359324b3ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.750154 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dzjh\" (UniqueName: \"kubernetes.io/projected/f71427ed-e785-491d-b980-63359324b3ac-kube-api-access-2dzjh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7\" (UID: \"f71427ed-e785-491d-b980-63359324b3ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.852220 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f71427ed-e785-491d-b980-63359324b3ac-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7\" (UID: \"f71427ed-e785-491d-b980-63359324b3ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.852295 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dzjh\" (UniqueName: \"kubernetes.io/projected/f71427ed-e785-491d-b980-63359324b3ac-kube-api-access-2dzjh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7\" (UID: \"f71427ed-e785-491d-b980-63359324b3ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.852372 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f71427ed-e785-491d-b980-63359324b3ac-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7\" (UID: \"f71427ed-e785-491d-b980-63359324b3ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.857607 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f71427ed-e785-491d-b980-63359324b3ac-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7\" (UID: \"f71427ed-e785-491d-b980-63359324b3ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.861507 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f71427ed-e785-491d-b980-63359324b3ac-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7\" (UID: \"f71427ed-e785-491d-b980-63359324b3ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.873970 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dzjh\" (UniqueName: \"kubernetes.io/projected/f71427ed-e785-491d-b980-63359324b3ac-kube-api-access-2dzjh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7\" (UID: \"f71427ed-e785-491d-b980-63359324b3ac\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"
Nov 24 21:50:21 crc kubenswrapper[4915]: I1124 21:50:21.951504 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"
Nov 24 21:50:22 crc kubenswrapper[4915]: I1124 21:50:22.543985 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"]
Nov 24 21:50:23 crc kubenswrapper[4915]: I1124 21:50:23.540799 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7" event={"ID":"f71427ed-e785-491d-b980-63359324b3ac","Type":"ContainerStarted","Data":"e06429ee7a775126ec5e848985adf6c19f97928285cb758cf995f4cc9f03989e"}
Nov 24 21:50:23 crc kubenswrapper[4915]: I1124 21:50:23.541384 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7" event={"ID":"f71427ed-e785-491d-b980-63359324b3ac","Type":"ContainerStarted","Data":"b84c03770db996bac8a9b16fac6667459dd1b3fa7adfd5c470f26460d2895b9d"}
Nov 24 21:50:23 crc kubenswrapper[4915]: I1124 21:50:23.580679 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7" podStartSLOduration=2.103800761 podStartE2EDuration="2.580658285s" podCreationTimestamp="2025-11-24 21:50:21 +0000 UTC" firstStartedPulling="2025-11-24 21:50:22.562367622 +0000 UTC m=+1840.878619795" lastFinishedPulling="2025-11-24 21:50:23.039225136 +0000 UTC m=+1841.355477319" observedRunningTime="2025-11-24 21:50:23.560763798 +0000 UTC m=+1841.877015981" watchObservedRunningTime="2025-11-24 21:50:23.580658285 +0000 UTC m=+1841.896910478"
Nov 24 21:50:27 crc kubenswrapper[4915]: I1124 21:50:27.426705 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489"
Nov 24 21:50:27 crc kubenswrapper[4915]: E1124 21:50:27.427628 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 21:50:41 crc kubenswrapper[4915]: I1124 21:50:41.427606 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489"
Nov 24 21:50:41 crc kubenswrapper[4915]: E1124 21:50:41.428508 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 21:50:47 crc kubenswrapper[4915]: I1124 21:50:47.080031 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-59d9-account-create-tk9r7"]
Nov 24 21:50:47 crc kubenswrapper[4915]: I1124 21:50:47.094714 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-381b-account-create-2p98p"]
Nov 24 21:50:47 crc kubenswrapper[4915]: I1124 21:50:47.107595 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-pxmbk"]
Nov 24 21:50:47 crc kubenswrapper[4915]: I1124 21:50:47.120427 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-rp6qr"]
Nov 24 21:50:47 crc kubenswrapper[4915]: I1124 21:50:47.132305 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c522-account-create-hp224"]
Nov 24 21:50:47 crc kubenswrapper[4915]: I1124 21:50:47.144653 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-59d9-account-create-tk9r7"]
Nov 24 21:50:47 crc kubenswrapper[4915]: I1124 21:50:47.157409 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-pxmbk"]
Nov 24 21:50:47 crc kubenswrapper[4915]: I1124 21:50:47.167049 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-381b-account-create-2p98p"]
Nov 24 21:50:47 crc kubenswrapper[4915]: I1124 21:50:47.176483 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c522-account-create-hp224"]
Nov 24 21:50:47 crc kubenswrapper[4915]: I1124 21:50:47.188218 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-rp6qr"]
Nov 24 21:50:48 crc kubenswrapper[4915]: I1124 21:50:48.452957 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bfad247-8a41-40c1-876a-a8948d106c5f" path="/var/lib/kubelet/pods/1bfad247-8a41-40c1-876a-a8948d106c5f/volumes"
Nov 24 21:50:48 crc kubenswrapper[4915]: I1124 21:50:48.456768 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e339ac3-4a26-4a61-8887-e007e1f3d35c" path="/var/lib/kubelet/pods/8e339ac3-4a26-4a61-8887-e007e1f3d35c/volumes"
Nov 24 21:50:48 crc kubenswrapper[4915]: I1124 21:50:48.459372 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ec792db-f087-4f1f-a6d5-83e85ca536f9" path="/var/lib/kubelet/pods/8ec792db-f087-4f1f-a6d5-83e85ca536f9/volumes"
Nov 24 21:50:48 crc kubenswrapper[4915]: I1124 21:50:48.461231 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b449b4ff-1966-465a-8df7-13c2c1a28f75" path="/var/lib/kubelet/pods/b449b4ff-1966-465a-8df7-13c2c1a28f75/volumes"
Nov 24 21:50:48 crc kubenswrapper[4915]: I1124 21:50:48.468670 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb6d5ae7-edbd-4f11-8635-02213556ff48" path="/var/lib/kubelet/pods/bb6d5ae7-edbd-4f11-8635-02213556ff48/volumes"
Nov 24 21:50:49 crc kubenswrapper[4915]: I1124 21:50:49.049459 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fwvf6"]
Nov 24 21:50:49 crc kubenswrapper[4915]: I1124 21:50:49.062298 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-fwvf6"]
Nov 24 21:50:50 crc kubenswrapper[4915]: I1124 21:50:50.448607 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58f1a4a9-6f26-483a-966a-f29142a74b4e" path="/var/lib/kubelet/pods/58f1a4a9-6f26-483a-966a-f29142a74b4e/volumes"
Nov 24 21:50:52 crc kubenswrapper[4915]: I1124 21:50:52.040196 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-a004-account-create-rfvch"]
Nov 24 21:50:52 crc kubenswrapper[4915]: I1124 21:50:52.060132 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-a004-account-create-rfvch"]
Nov 24 21:50:52 crc kubenswrapper[4915]: I1124 21:50:52.442414 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb9cf31d-94cf-471c-ab71-668c1cbdcffd" path="/var/lib/kubelet/pods/bb9cf31d-94cf-471c-ab71-668c1cbdcffd/volumes"
Nov 24 21:50:53 crc kubenswrapper[4915]: I1124 21:50:53.427179 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489"
Nov 24 21:50:53 crc kubenswrapper[4915]: E1124 21:50:53.427592 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 21:50:59 crc kubenswrapper[4915]: I1124 21:50:59.041442 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-qgvss"]
Nov 24 21:50:59 crc kubenswrapper[4915]: I1124 21:50:59.055902 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-qgvss"]
Nov 24 21:51:00 crc kubenswrapper[4915]: I1124 21:51:00.082396 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6"]
Nov 24 21:51:00 crc kubenswrapper[4915]: I1124 21:51:00.109131 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-gv8q6"]
Nov 24 21:51:00 crc kubenswrapper[4915]: I1124 21:51:00.447136 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b5f98ba-ac45-4e86-b59f-34018eebddf8" path="/var/lib/kubelet/pods/3b5f98ba-ac45-4e86-b59f-34018eebddf8/volumes"
Nov 24 21:51:00 crc kubenswrapper[4915]: I1124 21:51:00.448062 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c73858f3-e174-4a29-a454-1e7a222fd34e" path="/var/lib/kubelet/pods/c73858f3-e174-4a29-a454-1e7a222fd34e/volumes"
Nov 24 21:51:01 crc kubenswrapper[4915]: I1124 21:51:01.041189 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-1c22-account-create-slgtx"]
Nov 24 21:51:01 crc kubenswrapper[4915]: I1124 21:51:01.057187 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-1c22-account-create-slgtx"]
Nov 24 21:51:02 crc kubenswrapper[4915]: I1124 21:51:02.451883 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23b95ea3-815a-4b5a-a623-012022b46b78" path="/var/lib/kubelet/pods/23b95ea3-815a-4b5a-a623-012022b46b78/volumes"
Nov 24 21:51:05 crc kubenswrapper[4915]: I1124 21:51:05.428246 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489"
Nov 24 21:51:06 crc kubenswrapper[4915]: I1124 21:51:06.116906 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"556aae8bbced33e3b49c2bf37d5c3e01d83bc99ece680acbe35b92b0990e5b5d"}
Nov 24 21:51:14 crc kubenswrapper[4915]: I1124 21:51:14.146437 4915 scope.go:117] "RemoveContainer" containerID="3dc6890cf19b43df6cc27b302bd755aea07fb56994ee11b98a31fbad56367aaa"
Nov 24 21:51:14 crc kubenswrapper[4915]: I1124 21:51:14.178504 4915 scope.go:117] "RemoveContainer" containerID="58a1174e7fb92acce5c3a2ba0125a276a4ff6fb58e91a800cc375746bfc34084"
Nov 24 21:51:14 crc kubenswrapper[4915]: I1124 21:51:14.261716 4915 scope.go:117] "RemoveContainer" containerID="6b0aa3428e2ffd403420cd969f1ed3877c982cf1e85c0b3af3385eb466455007"
Nov 24 21:51:14 crc kubenswrapper[4915]: I1124 21:51:14.307484 4915 scope.go:117] "RemoveContainer" containerID="94c360f9dd8172b2701ffde4c9b1235e3b7bd24a13029ca55ae0ab784fee5420"
Nov 24 21:51:14 crc kubenswrapper[4915]: I1124 21:51:14.336752 4915 scope.go:117] "RemoveContainer" containerID="1a55bdb8125041c319557e9986b7c8acd09fb081321ba18793bbe20a0b54fd2f"
Nov 24 21:51:14 crc kubenswrapper[4915]: I1124 21:51:14.389352 4915 scope.go:117] "RemoveContainer" containerID="8996b7c9d8affc73412b71985eac6479d526428730be7e006bc597787068b8ed"
Nov 24 21:51:14 crc kubenswrapper[4915]: I1124 21:51:14.464652 4915 scope.go:117] "RemoveContainer" containerID="1bcc7f9d9daea9311d26122f9ea7ec24798aca0eb4f750d10487450e766ad1c6"
Nov 24 21:51:14 crc kubenswrapper[4915]: I1124 21:51:14.497942 4915 scope.go:117] "RemoveContainer" containerID="9f6c9aee891e4922f3827aa4b4d956552726de373a24435c32c33572652bb41a"
Nov 24 21:51:14 crc kubenswrapper[4915]: I1124 21:51:14.532125 4915 scope.go:117] "RemoveContainer" containerID="a753f3b8ae1ff69ea19c039af8cf90c9f555d2220aa8cfe8e9252a90a2c85980"
Nov 24 21:51:14 crc kubenswrapper[4915]: I1124 21:51:14.560539 4915 scope.go:117] "RemoveContainer" containerID="e3b2b4d7bf833ee1b510ba541b0232acb16649f203d91419473b6ab4dd29d0c2"
Nov 24 21:51:14 crc kubenswrapper[4915]: I1124 21:51:14.585629 4915 scope.go:117] "RemoveContainer" containerID="a78fb559c775d164594a142ecac63fb8bcdd92613226ab8ccb2ac1d761a800cb"
Nov 24 21:51:22 crc kubenswrapper[4915]: I1124 21:51:22.035526 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-wtkdf"]
Nov 24 21:51:22 crc kubenswrapper[4915]: I1124 21:51:22.052417 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-wtkdf"]
Nov 24 21:51:22 crc kubenswrapper[4915]: I1124 21:51:22.441712 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815268b1-2fe4-4c1b-a56f-73cc4a0ccf46" path="/var/lib/kubelet/pods/815268b1-2fe4-4c1b-a56f-73cc4a0ccf46/volumes"
Nov 24 21:51:31 crc kubenswrapper[4915]: I1124 21:51:31.054903 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-ssnfp"]
Nov 24 21:51:31 crc kubenswrapper[4915]: I1124 21:51:31.069704 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-e10f-account-create-5mrxz"]
Nov 24 21:51:31 crc kubenswrapper[4915]: I1124 21:51:31.082151 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-4269-account-create-rbxk7"]
Nov 24 21:51:31 crc kubenswrapper[4915]: I1124 21:51:31.091615 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-s5npb"]
Nov 24 21:51:31 crc kubenswrapper[4915]: I1124 21:51:31.100619 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-da23-account-create-xqsrk"]
Nov 24 21:51:31 crc kubenswrapper[4915]: I1124 21:51:31.109618 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-4269-account-create-rbxk7"]
Nov 24 21:51:31 crc kubenswrapper[4915]: I1124 21:51:31.119558 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-ssnfp"]
Nov 24 21:51:31 crc kubenswrapper[4915]: I1124 21:51:31.128610 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-e10f-account-create-5mrxz"]
Nov 24 21:51:31 crc kubenswrapper[4915]: I1124 21:51:31.138000 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-s5npb"]
Nov 24 21:51:31 crc kubenswrapper[4915]: I1124 21:51:31.148263 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-da23-account-create-xqsrk"]
Nov 24 21:51:32 crc kubenswrapper[4915]: I1124 21:51:32.450250 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00e73e89-3b03-49a8-878b-3b208952476b" path="/var/lib/kubelet/pods/00e73e89-3b03-49a8-878b-3b208952476b/volumes"
Nov 24 21:51:32 crc kubenswrapper[4915]: I1124 21:51:32.453207 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a9cf3f7-758e-4c7a-9ace-de87acde4680" path="/var/lib/kubelet/pods/0a9cf3f7-758e-4c7a-9ace-de87acde4680/volumes"
Nov 24 21:51:32 crc kubenswrapper[4915]: I1124 21:51:32.454706 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="240b0e2f-e08f-45b7-a006-4a982095780c" path="/var/lib/kubelet/pods/240b0e2f-e08f-45b7-a006-4a982095780c/volumes"
Nov 24 21:51:32 crc kubenswrapper[4915]: I1124 21:51:32.459882 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4699e1f2-8d9a-4aff-bf8a-d648162c8bf3" path="/var/lib/kubelet/pods/4699e1f2-8d9a-4aff-bf8a-d648162c8bf3/volumes"
Nov 24 21:51:32 crc kubenswrapper[4915]: I1124 21:51:32.464739 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="919f233f-0ca4-4b04-ad6f-4b4bebba4f3b" path="/var/lib/kubelet/pods/919f233f-0ca4-4b04-ad6f-4b4bebba4f3b/volumes"
Nov 24 21:51:34 crc kubenswrapper[4915]: I1124 21:51:34.043230 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-kg5gb"]
Nov 24 21:51:34 crc kubenswrapper[4915]: I1124 21:51:34.057308 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-tdtzl"]
Nov 24 21:51:34 crc kubenswrapper[4915]: I1124 21:51:34.069891 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-kg5gb"]
Nov 24 21:51:34 crc kubenswrapper[4915]: I1124 21:51:34.078261 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5e4c-account-create-dnwgt"]
Nov 24 21:51:34 crc kubenswrapper[4915]: I1124 21:51:34.087112 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-tdtzl"]
Nov 24 21:51:34 crc kubenswrapper[4915]: I1124 21:51:34.096924 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5e4c-account-create-dnwgt"]
Nov 24 21:51:34 crc kubenswrapper[4915]: I1124 21:51:34.442015 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f4148e-7141-4cb7-9ed3-e0369c008c28" path="/var/lib/kubelet/pods/70f4148e-7141-4cb7-9ed3-e0369c008c28/volumes"
Nov 24 21:51:34 crc kubenswrapper[4915]: I1124 21:51:34.443900 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c74c4c4b-c291-42e8-a7f5-37fc513af9b0" path="/var/lib/kubelet/pods/c74c4c4b-c291-42e8-a7f5-37fc513af9b0/volumes"
Nov 24 21:51:34 crc kubenswrapper[4915]: I1124 21:51:34.444627 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea12e14a-f2f3-476e-886d-5a1648322134" path="/var/lib/kubelet/pods/ea12e14a-f2f3-476e-886d-5a1648322134/volumes"
Nov 24 21:51:40 crc kubenswrapper[4915]: I1124 21:51:40.040628 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-5fb97"]
Nov 24 21:51:40 crc kubenswrapper[4915]: I1124 21:51:40.049849 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-5fb97"]
Nov 24 21:51:40 crc kubenswrapper[4915]: I1124 21:51:40.444067 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cd7abca-28b3-4ddc-b209-7f95193ffa11" path="/var/lib/kubelet/pods/2cd7abca-28b3-4ddc-b209-7f95193ffa11/volumes"
Nov 24 21:52:06 crc kubenswrapper[4915]: I1124 21:52:06.994715 4915 generic.go:334] "Generic (PLEG): container finished" podID="f71427ed-e785-491d-b980-63359324b3ac" containerID="e06429ee7a775126ec5e848985adf6c19f97928285cb758cf995f4cc9f03989e" exitCode=0
Nov 24 21:52:06 crc kubenswrapper[4915]: I1124 21:52:06.994817 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7" event={"ID":"f71427ed-e785-491d-b980-63359324b3ac","Type":"ContainerDied","Data":"e06429ee7a775126ec5e848985adf6c19f97928285cb758cf995f4cc9f03989e"}
Nov 24 21:52:08 crc kubenswrapper[4915]: I1124 21:52:08.053074 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vx4fc"]
Nov 24 21:52:08 crc kubenswrapper[4915]: I1124 21:52:08.065320 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vx4fc"]
Nov 24 21:52:08 crc kubenswrapper[4915]: I1124 21:52:08.449175 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54e7cfde-938c-4b51-8cfc-a1f290de00fd" path="/var/lib/kubelet/pods/54e7cfde-938c-4b51-8cfc-a1f290de00fd/volumes"
Nov 24 21:52:08 crc kubenswrapper[4915]: I1124 21:52:08.562549 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"
Nov 24 21:52:08 crc kubenswrapper[4915]: I1124 21:52:08.690396 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dzjh\" (UniqueName: \"kubernetes.io/projected/f71427ed-e785-491d-b980-63359324b3ac-kube-api-access-2dzjh\") pod \"f71427ed-e785-491d-b980-63359324b3ac\" (UID: \"f71427ed-e785-491d-b980-63359324b3ac\") "
Nov 24 21:52:08 crc kubenswrapper[4915]: I1124 21:52:08.690487 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f71427ed-e785-491d-b980-63359324b3ac-ssh-key\") pod \"f71427ed-e785-491d-b980-63359324b3ac\" (UID: \"f71427ed-e785-491d-b980-63359324b3ac\") "
Nov 24 21:52:08 crc kubenswrapper[4915]: I1124 21:52:08.690691 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f71427ed-e785-491d-b980-63359324b3ac-inventory\") pod \"f71427ed-e785-491d-b980-63359324b3ac\" (UID: \"f71427ed-e785-491d-b980-63359324b3ac\") "
Nov 24 21:52:08 crc kubenswrapper[4915]: I1124 21:52:08.697976 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f71427ed-e785-491d-b980-63359324b3ac-kube-api-access-2dzjh" (OuterVolumeSpecName: "kube-api-access-2dzjh") pod "f71427ed-e785-491d-b980-63359324b3ac" (UID: "f71427ed-e785-491d-b980-63359324b3ac"). InnerVolumeSpecName "kube-api-access-2dzjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 21:52:08 crc kubenswrapper[4915]: I1124 21:52:08.726361 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f71427ed-e785-491d-b980-63359324b3ac-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f71427ed-e785-491d-b980-63359324b3ac" (UID: "f71427ed-e785-491d-b980-63359324b3ac"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:52:08 crc kubenswrapper[4915]: I1124 21:52:08.763970 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f71427ed-e785-491d-b980-63359324b3ac-inventory" (OuterVolumeSpecName: "inventory") pod "f71427ed-e785-491d-b980-63359324b3ac" (UID: "f71427ed-e785-491d-b980-63359324b3ac"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 21:52:08 crc kubenswrapper[4915]: I1124 21:52:08.793657 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dzjh\" (UniqueName: \"kubernetes.io/projected/f71427ed-e785-491d-b980-63359324b3ac-kube-api-access-2dzjh\") on node \"crc\" DevicePath \"\""
Nov 24 21:52:08 crc kubenswrapper[4915]: I1124 21:52:08.793694 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f71427ed-e785-491d-b980-63359324b3ac-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 21:52:08 crc kubenswrapper[4915]: I1124 21:52:08.793708 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f71427ed-e785-491d-b980-63359324b3ac-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.021222 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7" event={"ID":"f71427ed-e785-491d-b980-63359324b3ac","Type":"ContainerDied","Data":"b84c03770db996bac8a9b16fac6667459dd1b3fa7adfd5c470f26460d2895b9d"}
Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.021844 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b84c03770db996bac8a9b16fac6667459dd1b3fa7adfd5c470f26460d2895b9d"
Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.021350 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7"
Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.103751 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm"]
Nov 24 21:52:09 crc kubenswrapper[4915]: E1124 21:52:09.104432 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f71427ed-e785-491d-b980-63359324b3ac" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.104452 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f71427ed-e785-491d-b980-63359324b3ac" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.104763 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f71427ed-e785-491d-b980-63359324b3ac" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.105899 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.110084 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.110254 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.110323 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.110947 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.118973 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm"] Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.201874 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kbvk\" (UniqueName: \"kubernetes.io/projected/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-kube-api-access-9kbvk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-97pjm\" (UID: \"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.201938 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-97pjm\" (UID: \"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" Nov 24 21:52:09 crc kubenswrapper[4915]: 
I1124 21:52:09.202012 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-97pjm\" (UID: \"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.304469 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kbvk\" (UniqueName: \"kubernetes.io/projected/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-kube-api-access-9kbvk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-97pjm\" (UID: \"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.304540 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-97pjm\" (UID: \"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.304615 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-97pjm\" (UID: \"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.309068 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-97pjm\" (UID: \"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.309109 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-97pjm\" (UID: \"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.321624 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kbvk\" (UniqueName: \"kubernetes.io/projected/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-kube-api-access-9kbvk\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-97pjm\" (UID: \"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" Nov 24 21:52:09 crc kubenswrapper[4915]: I1124 21:52:09.432096 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" Nov 24 21:52:10 crc kubenswrapper[4915]: I1124 21:52:10.057299 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm"] Nov 24 21:52:10 crc kubenswrapper[4915]: I1124 21:52:10.065533 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:52:11 crc kubenswrapper[4915]: I1124 21:52:11.048605 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" event={"ID":"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe","Type":"ContainerStarted","Data":"4d38c20f589156e1dd1493359c83b6960ecf8af28739c6f8eede4b038c00a120"} Nov 24 21:52:11 crc kubenswrapper[4915]: I1124 21:52:11.050911 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" event={"ID":"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe","Type":"ContainerStarted","Data":"d530605817de783a8b942fe608e302d6766c73933d7ed1f86208c084d4a09ba6"} Nov 24 21:52:14 crc kubenswrapper[4915]: I1124 21:52:14.845396 4915 scope.go:117] "RemoveContainer" containerID="703956656f1d9507f18cecb6c6ca43fb84a8e621e26c2466956b78e2596c9361" Nov 24 21:52:14 crc kubenswrapper[4915]: I1124 21:52:14.879165 4915 scope.go:117] "RemoveContainer" containerID="21234988054c2e55dbac98bf6d4689db614603f6da9308a1cfdcde1089b2b0ea" Nov 24 21:52:14 crc kubenswrapper[4915]: I1124 21:52:14.962282 4915 scope.go:117] "RemoveContainer" containerID="c017670a0bfcff348bb4123c9a8d89bbd8e82682209fc96e8d1d194264d0bad7" Nov 24 21:52:15 crc kubenswrapper[4915]: I1124 21:52:15.006824 4915 scope.go:117] "RemoveContainer" containerID="c456e024068960c23b78981c13ee141152f417a3489600c1948b124478943411" Nov 24 21:52:15 crc kubenswrapper[4915]: I1124 21:52:15.062183 4915 scope.go:117] "RemoveContainer" 
containerID="f49029c1a7ba8da1e49745f42588ebbf7a17a19a621a1de42b9ff6dcfa1b75db" Nov 24 21:52:15 crc kubenswrapper[4915]: I1124 21:52:15.117370 4915 scope.go:117] "RemoveContainer" containerID="2825052bd516853da21b8403a38e8dcff8a97222ce103a096522461af3aeff31" Nov 24 21:52:15 crc kubenswrapper[4915]: I1124 21:52:15.176864 4915 scope.go:117] "RemoveContainer" containerID="79b3448254101653401e824b7557d8ae6adc7b9e82fdd59ef9b93cb5f5bf96b5" Nov 24 21:52:15 crc kubenswrapper[4915]: I1124 21:52:15.210741 4915 scope.go:117] "RemoveContainer" containerID="25ff21b6d6353d520d680971004d9b1c657f2b15b6ca52bafcf776e64747a427" Nov 24 21:52:15 crc kubenswrapper[4915]: I1124 21:52:15.242401 4915 scope.go:117] "RemoveContainer" containerID="f9340f261202a38f689b7c9fce867b689e2b68762d7b650e05089de27c813aad" Nov 24 21:52:15 crc kubenswrapper[4915]: I1124 21:52:15.275867 4915 scope.go:117] "RemoveContainer" containerID="a2b3bde45feac8319bb960ad4202e31c64449add4ca5639a7d333a90aa4f3376" Nov 24 21:52:15 crc kubenswrapper[4915]: I1124 21:52:15.306720 4915 scope.go:117] "RemoveContainer" containerID="ac7a3ec8f8851e686611fd946399d4e113eb7aaba9636134cc31e04ac3d1359c" Nov 24 21:52:15 crc kubenswrapper[4915]: I1124 21:52:15.336159 4915 scope.go:117] "RemoveContainer" containerID="1c5e973dcf06aa760d7b734fcfda712f8bc35303801b00aeaa81d1d07b41733c" Nov 24 21:52:15 crc kubenswrapper[4915]: I1124 21:52:15.361138 4915 scope.go:117] "RemoveContainer" containerID="b49f70e0f7cdc41a8164c858da5fd643c848ebc95f5a0e5ad313b65d19fa40e9" Nov 24 21:52:18 crc kubenswrapper[4915]: I1124 21:52:18.038056 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" podStartSLOduration=8.493042168 podStartE2EDuration="9.038034022s" podCreationTimestamp="2025-11-24 21:52:09 +0000 UTC" firstStartedPulling="2025-11-24 21:52:10.065204645 +0000 UTC m=+1948.381456818" lastFinishedPulling="2025-11-24 21:52:10.610196499 +0000 UTC 
m=+1948.926448672" observedRunningTime="2025-11-24 21:52:11.078720789 +0000 UTC m=+1949.394972972" watchObservedRunningTime="2025-11-24 21:52:18.038034022 +0000 UTC m=+1956.354286195" Nov 24 21:52:18 crc kubenswrapper[4915]: I1124 21:52:18.040492 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2dxdv"] Nov 24 21:52:18 crc kubenswrapper[4915]: I1124 21:52:18.049729 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2dxdv"] Nov 24 21:52:18 crc kubenswrapper[4915]: I1124 21:52:18.443238 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="578a54f5-2d6f-4c21-b549-55cd00237570" path="/var/lib/kubelet/pods/578a54f5-2d6f-4c21-b549-55cd00237570/volumes" Nov 24 21:52:19 crc kubenswrapper[4915]: I1124 21:52:19.048845 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lf4m9"] Nov 24 21:52:19 crc kubenswrapper[4915]: I1124 21:52:19.062184 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lf4m9"] Nov 24 21:52:20 crc kubenswrapper[4915]: I1124 21:52:20.052555 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-6jszw"] Nov 24 21:52:20 crc kubenswrapper[4915]: I1124 21:52:20.070906 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-6jszw"] Nov 24 21:52:20 crc kubenswrapper[4915]: I1124 21:52:20.443383 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582b5498-6320-42dc-9a5b-9a6fd28c791b" path="/var/lib/kubelet/pods/582b5498-6320-42dc-9a5b-9a6fd28c791b/volumes" Nov 24 21:52:20 crc kubenswrapper[4915]: I1124 21:52:20.446124 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c21efec-e27b-4e9e-bdc6-a4d9a0eab412" path="/var/lib/kubelet/pods/8c21efec-e27b-4e9e-bdc6-a4d9a0eab412/volumes" Nov 24 21:52:29 crc kubenswrapper[4915]: I1124 21:52:29.049757 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-db-sync-kxbs8"] Nov 24 21:52:29 crc kubenswrapper[4915]: I1124 21:52:29.079707 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-kxbs8"] Nov 24 21:52:30 crc kubenswrapper[4915]: I1124 21:52:30.452611 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8612c3b5-24cc-431a-a888-8be923564356" path="/var/lib/kubelet/pods/8612c3b5-24cc-431a-a888-8be923564356/volumes" Nov 24 21:52:31 crc kubenswrapper[4915]: I1124 21:52:31.057720 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-9t9tc"] Nov 24 21:52:31 crc kubenswrapper[4915]: I1124 21:52:31.073501 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-9t9tc"] Nov 24 21:52:32 crc kubenswrapper[4915]: I1124 21:52:32.443434 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0646b5c8-f87a-4f27-9327-1bc87669623f" path="/var/lib/kubelet/pods/0646b5c8-f87a-4f27-9327-1bc87669623f/volumes" Nov 24 21:53:15 crc kubenswrapper[4915]: I1124 21:53:15.635597 4915 scope.go:117] "RemoveContainer" containerID="f14c105755b0b195f16aa2e008f514c7b484c7b9e33ba2ae6f3d7b07c625cfd7" Nov 24 21:53:15 crc kubenswrapper[4915]: I1124 21:53:15.680277 4915 scope.go:117] "RemoveContainer" containerID="3b3e7d12bbf100725c78908321ba25aaaf6c50003e7611cbd694cc1c36d254e9" Nov 24 21:53:15 crc kubenswrapper[4915]: I1124 21:53:15.750647 4915 scope.go:117] "RemoveContainer" containerID="45a0e566a416f85d2f01b60ed2472996b7b19a62bc2f30b07c270d1c82cc380b" Nov 24 21:53:15 crc kubenswrapper[4915]: I1124 21:53:15.814182 4915 scope.go:117] "RemoveContainer" containerID="ce8b42e8ea3db5b79b2a1cb9cb42c5123911d4be24151650c31db77bdffa919d" Nov 24 21:53:15 crc kubenswrapper[4915]: I1124 21:53:15.884587 4915 scope.go:117] "RemoveContainer" containerID="ce418a7f9c3df84160520ae439d16daefaa676f20397a06de14da0a48733b4b1" Nov 24 21:53:16 crc kubenswrapper[4915]: I1124 21:53:16.049907 4915 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-xkncx"] Nov 24 21:53:16 crc kubenswrapper[4915]: I1124 21:53:16.060760 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-xkncx"] Nov 24 21:53:16 crc kubenswrapper[4915]: I1124 21:53:16.440074 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c50a58e9-839d-4c0a-9aea-e27e9b892db0" path="/var/lib/kubelet/pods/c50a58e9-839d-4c0a-9aea-e27e9b892db0/volumes" Nov 24 21:53:17 crc kubenswrapper[4915]: I1124 21:53:17.042013 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-678kk"] Nov 24 21:53:17 crc kubenswrapper[4915]: I1124 21:53:17.052344 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-1fec-account-create-c2mcb"] Nov 24 21:53:17 crc kubenswrapper[4915]: I1124 21:53:17.061614 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-541f-account-create-6xjb9"] Nov 24 21:53:17 crc kubenswrapper[4915]: I1124 21:53:17.073208 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-2688t"] Nov 24 21:53:17 crc kubenswrapper[4915]: I1124 21:53:17.082496 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-05c3-account-create-87xhn"] Nov 24 21:53:17 crc kubenswrapper[4915]: I1124 21:53:17.117980 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-541f-account-create-6xjb9"] Nov 24 21:53:17 crc kubenswrapper[4915]: I1124 21:53:17.132317 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-678kk"] Nov 24 21:53:17 crc kubenswrapper[4915]: I1124 21:53:17.144503 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-1fec-account-create-c2mcb"] Nov 24 21:53:17 crc kubenswrapper[4915]: I1124 21:53:17.157280 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-05c3-account-create-87xhn"] 
Nov 24 21:53:17 crc kubenswrapper[4915]: I1124 21:53:17.166804 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-2688t"] Nov 24 21:53:18 crc kubenswrapper[4915]: I1124 21:53:18.445446 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ca21771-e57e-49fc-a4c0-a8fea17fd2ea" path="/var/lib/kubelet/pods/0ca21771-e57e-49fc-a4c0-a8fea17fd2ea/volumes" Nov 24 21:53:18 crc kubenswrapper[4915]: I1124 21:53:18.447348 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4387d1ab-ac63-465a-a164-1fb5f78352da" path="/var/lib/kubelet/pods/4387d1ab-ac63-465a-a164-1fb5f78352da/volumes" Nov 24 21:53:18 crc kubenswrapper[4915]: I1124 21:53:18.450048 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ea7f227-3e0f-4e4f-9136-22a74b0f3057" path="/var/lib/kubelet/pods/5ea7f227-3e0f-4e4f-9136-22a74b0f3057/volumes" Nov 24 21:53:18 crc kubenswrapper[4915]: I1124 21:53:18.451106 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="789abde1-5e42-4032-8b87-e3c475c709e8" path="/var/lib/kubelet/pods/789abde1-5e42-4032-8b87-e3c475c709e8/volumes" Nov 24 21:53:18 crc kubenswrapper[4915]: I1124 21:53:18.453308 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1089ced-cc42-419b-a130-e71479e51b16" path="/var/lib/kubelet/pods/f1089ced-cc42-419b-a130-e71479e51b16/volumes" Nov 24 21:53:24 crc kubenswrapper[4915]: I1124 21:53:24.327679 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:53:24 crc kubenswrapper[4915]: I1124 21:53:24.327994 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:53:27 crc kubenswrapper[4915]: I1124 21:53:27.051208 4915 generic.go:334] "Generic (PLEG): container finished" podID="8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe" containerID="4d38c20f589156e1dd1493359c83b6960ecf8af28739c6f8eede4b038c00a120" exitCode=0 Nov 24 21:53:27 crc kubenswrapper[4915]: I1124 21:53:27.051291 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" event={"ID":"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe","Type":"ContainerDied","Data":"4d38c20f589156e1dd1493359c83b6960ecf8af28739c6f8eede4b038c00a120"} Nov 24 21:53:28 crc kubenswrapper[4915]: I1124 21:53:28.663027 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" Nov 24 21:53:28 crc kubenswrapper[4915]: I1124 21:53:28.740973 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-ssh-key\") pod \"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\" (UID: \"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\") " Nov 24 21:53:28 crc kubenswrapper[4915]: I1124 21:53:28.741019 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-inventory\") pod \"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\" (UID: \"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\") " Nov 24 21:53:28 crc kubenswrapper[4915]: I1124 21:53:28.741080 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kbvk\" (UniqueName: \"kubernetes.io/projected/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-kube-api-access-9kbvk\") pod \"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\" (UID: 
\"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe\") " Nov 24 21:53:28 crc kubenswrapper[4915]: I1124 21:53:28.747909 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-kube-api-access-9kbvk" (OuterVolumeSpecName: "kube-api-access-9kbvk") pod "8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe" (UID: "8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe"). InnerVolumeSpecName "kube-api-access-9kbvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:53:28 crc kubenswrapper[4915]: I1124 21:53:28.784544 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-inventory" (OuterVolumeSpecName: "inventory") pod "8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe" (UID: "8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:53:28 crc kubenswrapper[4915]: I1124 21:53:28.803149 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe" (UID: "8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:53:28 crc kubenswrapper[4915]: I1124 21:53:28.844068 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kbvk\" (UniqueName: \"kubernetes.io/projected/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-kube-api-access-9kbvk\") on node \"crc\" DevicePath \"\"" Nov 24 21:53:28 crc kubenswrapper[4915]: I1124 21:53:28.844127 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:53:28 crc kubenswrapper[4915]: I1124 21:53:28.844140 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.075549 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" event={"ID":"8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe","Type":"ContainerDied","Data":"d530605817de783a8b942fe608e302d6766c73933d7ed1f86208c084d4a09ba6"} Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.075596 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d530605817de783a8b942fe608e302d6766c73933d7ed1f86208c084d4a09ba6" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.075596 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-97pjm" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.187739 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49"] Nov 24 21:53:29 crc kubenswrapper[4915]: E1124 21:53:29.188484 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.188511 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.188829 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.189883 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.194047 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.194256 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.194397 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.199205 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49"] Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.200839 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.258579 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thrjr\" (UniqueName: \"kubernetes.io/projected/12c05179-3387-4aad-aa3a-f23931dd6360-kube-api-access-thrjr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8pf49\" (UID: \"12c05179-3387-4aad-aa3a-f23931dd6360\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.258647 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c05179-3387-4aad-aa3a-f23931dd6360-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8pf49\" (UID: \"12c05179-3387-4aad-aa3a-f23931dd6360\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 
21:53:29.258929 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/12c05179-3387-4aad-aa3a-f23931dd6360-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8pf49\" (UID: \"12c05179-3387-4aad-aa3a-f23931dd6360\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.363550 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thrjr\" (UniqueName: \"kubernetes.io/projected/12c05179-3387-4aad-aa3a-f23931dd6360-kube-api-access-thrjr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8pf49\" (UID: \"12c05179-3387-4aad-aa3a-f23931dd6360\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.363607 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c05179-3387-4aad-aa3a-f23931dd6360-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8pf49\" (UID: \"12c05179-3387-4aad-aa3a-f23931dd6360\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.363750 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/12c05179-3387-4aad-aa3a-f23931dd6360-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8pf49\" (UID: \"12c05179-3387-4aad-aa3a-f23931dd6360\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.367694 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c05179-3387-4aad-aa3a-f23931dd6360-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-8pf49\" (UID: \"12c05179-3387-4aad-aa3a-f23931dd6360\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.369248 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/12c05179-3387-4aad-aa3a-f23931dd6360-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8pf49\" (UID: \"12c05179-3387-4aad-aa3a-f23931dd6360\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.389817 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thrjr\" (UniqueName: \"kubernetes.io/projected/12c05179-3387-4aad-aa3a-f23931dd6360-kube-api-access-thrjr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8pf49\" (UID: \"12c05179-3387-4aad-aa3a-f23931dd6360\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" Nov 24 21:53:29 crc kubenswrapper[4915]: I1124 21:53:29.514517 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" Nov 24 21:53:30 crc kubenswrapper[4915]: I1124 21:53:30.155396 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49"] Nov 24 21:53:31 crc kubenswrapper[4915]: I1124 21:53:31.101086 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" event={"ID":"12c05179-3387-4aad-aa3a-f23931dd6360","Type":"ContainerStarted","Data":"b065134d49651c110658e0f8ca813c7dcd2cc70088ef2668c07603b58de307dd"} Nov 24 21:53:32 crc kubenswrapper[4915]: I1124 21:53:32.115653 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" event={"ID":"12c05179-3387-4aad-aa3a-f23931dd6360","Type":"ContainerStarted","Data":"3dc547f826bad05b12bb1684e795f945ea67321f2b77a7fbafd6ceefa632d4e3"} Nov 24 21:53:32 crc kubenswrapper[4915]: I1124 21:53:32.136757 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" podStartSLOduration=2.519754371 podStartE2EDuration="3.1367312s" podCreationTimestamp="2025-11-24 21:53:29 +0000 UTC" firstStartedPulling="2025-11-24 21:53:30.161671594 +0000 UTC m=+2028.477923757" lastFinishedPulling="2025-11-24 21:53:30.778648413 +0000 UTC m=+2029.094900586" observedRunningTime="2025-11-24 21:53:32.129632809 +0000 UTC m=+2030.445885002" watchObservedRunningTime="2025-11-24 21:53:32.1367312 +0000 UTC m=+2030.452983383" Nov 24 21:53:36 crc kubenswrapper[4915]: I1124 21:53:36.168651 4915 generic.go:334] "Generic (PLEG): container finished" podID="12c05179-3387-4aad-aa3a-f23931dd6360" containerID="3dc547f826bad05b12bb1684e795f945ea67321f2b77a7fbafd6ceefa632d4e3" exitCode=0 Nov 24 21:53:36 crc kubenswrapper[4915]: I1124 21:53:36.168719 4915 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" event={"ID":"12c05179-3387-4aad-aa3a-f23931dd6360","Type":"ContainerDied","Data":"3dc547f826bad05b12bb1684e795f945ea67321f2b77a7fbafd6ceefa632d4e3"} Nov 24 21:53:37 crc kubenswrapper[4915]: I1124 21:53:37.755912 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" Nov 24 21:53:37 crc kubenswrapper[4915]: I1124 21:53:37.865336 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thrjr\" (UniqueName: \"kubernetes.io/projected/12c05179-3387-4aad-aa3a-f23931dd6360-kube-api-access-thrjr\") pod \"12c05179-3387-4aad-aa3a-f23931dd6360\" (UID: \"12c05179-3387-4aad-aa3a-f23931dd6360\") " Nov 24 21:53:37 crc kubenswrapper[4915]: I1124 21:53:37.865464 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/12c05179-3387-4aad-aa3a-f23931dd6360-ssh-key\") pod \"12c05179-3387-4aad-aa3a-f23931dd6360\" (UID: \"12c05179-3387-4aad-aa3a-f23931dd6360\") " Nov 24 21:53:37 crc kubenswrapper[4915]: I1124 21:53:37.865521 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c05179-3387-4aad-aa3a-f23931dd6360-inventory\") pod \"12c05179-3387-4aad-aa3a-f23931dd6360\" (UID: \"12c05179-3387-4aad-aa3a-f23931dd6360\") " Nov 24 21:53:37 crc kubenswrapper[4915]: I1124 21:53:37.878088 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c05179-3387-4aad-aa3a-f23931dd6360-kube-api-access-thrjr" (OuterVolumeSpecName: "kube-api-access-thrjr") pod "12c05179-3387-4aad-aa3a-f23931dd6360" (UID: "12c05179-3387-4aad-aa3a-f23931dd6360"). InnerVolumeSpecName "kube-api-access-thrjr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:53:37 crc kubenswrapper[4915]: I1124 21:53:37.908856 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c05179-3387-4aad-aa3a-f23931dd6360-inventory" (OuterVolumeSpecName: "inventory") pod "12c05179-3387-4aad-aa3a-f23931dd6360" (UID: "12c05179-3387-4aad-aa3a-f23931dd6360"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:53:37 crc kubenswrapper[4915]: I1124 21:53:37.932898 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12c05179-3387-4aad-aa3a-f23931dd6360-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "12c05179-3387-4aad-aa3a-f23931dd6360" (UID: "12c05179-3387-4aad-aa3a-f23931dd6360"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:53:37 crc kubenswrapper[4915]: I1124 21:53:37.968734 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thrjr\" (UniqueName: \"kubernetes.io/projected/12c05179-3387-4aad-aa3a-f23931dd6360-kube-api-access-thrjr\") on node \"crc\" DevicePath \"\"" Nov 24 21:53:37 crc kubenswrapper[4915]: I1124 21:53:37.968771 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/12c05179-3387-4aad-aa3a-f23931dd6360-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:53:37 crc kubenswrapper[4915]: I1124 21:53:37.968801 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12c05179-3387-4aad-aa3a-f23931dd6360-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.204277 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" 
event={"ID":"12c05179-3387-4aad-aa3a-f23931dd6360","Type":"ContainerDied","Data":"b065134d49651c110658e0f8ca813c7dcd2cc70088ef2668c07603b58de307dd"} Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.204330 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b065134d49651c110658e0f8ca813c7dcd2cc70088ef2668c07603b58de307dd" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.204478 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8pf49" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.301034 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp"] Nov 24 21:53:38 crc kubenswrapper[4915]: E1124 21:53:38.301771 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12c05179-3387-4aad-aa3a-f23931dd6360" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.301840 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c05179-3387-4aad-aa3a-f23931dd6360" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.302360 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="12c05179-3387-4aad-aa3a-f23931dd6360" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.303928 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.306664 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.306943 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.307473 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.307508 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.313129 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp"] Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.398012 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/389bed1d-1ac1-470b-af86-8adb74146ef0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s9hmp\" (UID: \"389bed1d-1ac1-470b-af86-8adb74146ef0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.398093 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfckr\" (UniqueName: \"kubernetes.io/projected/389bed1d-1ac1-470b-af86-8adb74146ef0-kube-api-access-vfckr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s9hmp\" (UID: \"389bed1d-1ac1-470b-af86-8adb74146ef0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.398437 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/389bed1d-1ac1-470b-af86-8adb74146ef0-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s9hmp\" (UID: \"389bed1d-1ac1-470b-af86-8adb74146ef0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.500505 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/389bed1d-1ac1-470b-af86-8adb74146ef0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s9hmp\" (UID: \"389bed1d-1ac1-470b-af86-8adb74146ef0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.500648 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfckr\" (UniqueName: \"kubernetes.io/projected/389bed1d-1ac1-470b-af86-8adb74146ef0-kube-api-access-vfckr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s9hmp\" (UID: \"389bed1d-1ac1-470b-af86-8adb74146ef0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.500805 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/389bed1d-1ac1-470b-af86-8adb74146ef0-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s9hmp\" (UID: \"389bed1d-1ac1-470b-af86-8adb74146ef0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.506354 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/389bed1d-1ac1-470b-af86-8adb74146ef0-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s9hmp\" (UID: 
\"389bed1d-1ac1-470b-af86-8adb74146ef0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.516772 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/389bed1d-1ac1-470b-af86-8adb74146ef0-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s9hmp\" (UID: \"389bed1d-1ac1-470b-af86-8adb74146ef0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.520711 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfckr\" (UniqueName: \"kubernetes.io/projected/389bed1d-1ac1-470b-af86-8adb74146ef0-kube-api-access-vfckr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-s9hmp\" (UID: \"389bed1d-1ac1-470b-af86-8adb74146ef0\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" Nov 24 21:53:38 crc kubenswrapper[4915]: I1124 21:53:38.628920 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" Nov 24 21:53:39 crc kubenswrapper[4915]: I1124 21:53:39.201472 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp"] Nov 24 21:53:40 crc kubenswrapper[4915]: I1124 21:53:40.242403 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" event={"ID":"389bed1d-1ac1-470b-af86-8adb74146ef0","Type":"ContainerStarted","Data":"a06ab1066d3a178c344a8dcc9d221159b5bcbfb938548da25ea0c23f363828bc"} Nov 24 21:53:41 crc kubenswrapper[4915]: I1124 21:53:41.258950 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" event={"ID":"389bed1d-1ac1-470b-af86-8adb74146ef0","Type":"ContainerStarted","Data":"bed57ff53d40c39ed329a949feedb1f22cdaed5a06944f472873cdebf86dcda3"} Nov 24 21:53:41 crc kubenswrapper[4915]: I1124 21:53:41.276082 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" podStartSLOduration=1.699843566 podStartE2EDuration="3.276061824s" podCreationTimestamp="2025-11-24 21:53:38 +0000 UTC" firstStartedPulling="2025-11-24 21:53:39.20964927 +0000 UTC m=+2037.525901443" lastFinishedPulling="2025-11-24 21:53:40.785867528 +0000 UTC m=+2039.102119701" observedRunningTime="2025-11-24 21:53:41.273759391 +0000 UTC m=+2039.590011634" watchObservedRunningTime="2025-11-24 21:53:41.276061824 +0000 UTC m=+2039.592314007" Nov 24 21:53:54 crc kubenswrapper[4915]: I1124 21:53:54.326860 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:53:54 crc 
kubenswrapper[4915]: I1124 21:53:54.327308 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:54:01 crc kubenswrapper[4915]: I1124 21:54:01.056262 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jg68"] Nov 24 21:54:01 crc kubenswrapper[4915]: I1124 21:54:01.076027 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jg68"] Nov 24 21:54:02 crc kubenswrapper[4915]: I1124 21:54:02.445149 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25cfd15-cca3-48ae-b45c-81cc7a690f31" path="/var/lib/kubelet/pods/a25cfd15-cca3-48ae-b45c-81cc7a690f31/volumes" Nov 24 21:54:16 crc kubenswrapper[4915]: I1124 21:54:16.067374 4915 scope.go:117] "RemoveContainer" containerID="8b8cbb70659d80e191530e7c40b2b924d401ef7433f8a2552d670ab31e15f985" Nov 24 21:54:16 crc kubenswrapper[4915]: I1124 21:54:16.136953 4915 scope.go:117] "RemoveContainer" containerID="6d1276a333bb1ea524f81c8c303c52a8e90f0e61337e70a168ecd6200bae1b69" Nov 24 21:54:16 crc kubenswrapper[4915]: I1124 21:54:16.175819 4915 scope.go:117] "RemoveContainer" containerID="37e689d3a38e76cdd258398372e5c5923d5c12e3a83ba02761d37b5ffabc9724" Nov 24 21:54:16 crc kubenswrapper[4915]: I1124 21:54:16.235921 4915 scope.go:117] "RemoveContainer" containerID="91fad0a063c9f8f10c3f275798bfc0c0c2923894ca886bc2b316a82457e87abe" Nov 24 21:54:16 crc kubenswrapper[4915]: I1124 21:54:16.314070 4915 scope.go:117] "RemoveContainer" containerID="1a4613153211c71d0c2870ad58288dd61754bc7b04500dc2b96683469fb627d4" Nov 24 21:54:16 crc kubenswrapper[4915]: I1124 21:54:16.363284 4915 scope.go:117] "RemoveContainer" 
containerID="af399af7155b788b197e6fe35e604ced994e0fcf1cbe057c53aa5c94798a105c" Nov 24 21:54:16 crc kubenswrapper[4915]: I1124 21:54:16.426573 4915 scope.go:117] "RemoveContainer" containerID="7ee3b3c6520d78d154dabb0b74611526ce9d58814d358c68ad6b5f8de5f292d6" Nov 24 21:54:17 crc kubenswrapper[4915]: I1124 21:54:17.832890 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5pm2v"] Nov 24 21:54:17 crc kubenswrapper[4915]: I1124 21:54:17.839764 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:17 crc kubenswrapper[4915]: I1124 21:54:17.850311 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5pm2v"] Nov 24 21:54:18 crc kubenswrapper[4915]: I1124 21:54:18.041176 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-catalog-content\") pod \"redhat-operators-5pm2v\" (UID: \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\") " pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:18 crc kubenswrapper[4915]: I1124 21:54:18.041304 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-utilities\") pod \"redhat-operators-5pm2v\" (UID: \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\") " pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:18 crc kubenswrapper[4915]: I1124 21:54:18.041410 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxd2p\" (UniqueName: \"kubernetes.io/projected/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-kube-api-access-nxd2p\") pod \"redhat-operators-5pm2v\" (UID: \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\") " 
pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:18 crc kubenswrapper[4915]: I1124 21:54:18.143497 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-catalog-content\") pod \"redhat-operators-5pm2v\" (UID: \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\") " pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:18 crc kubenswrapper[4915]: I1124 21:54:18.143611 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-utilities\") pod \"redhat-operators-5pm2v\" (UID: \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\") " pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:18 crc kubenswrapper[4915]: I1124 21:54:18.143668 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxd2p\" (UniqueName: \"kubernetes.io/projected/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-kube-api-access-nxd2p\") pod \"redhat-operators-5pm2v\" (UID: \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\") " pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:18 crc kubenswrapper[4915]: I1124 21:54:18.144236 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-catalog-content\") pod \"redhat-operators-5pm2v\" (UID: \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\") " pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:18 crc kubenswrapper[4915]: I1124 21:54:18.144428 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-utilities\") pod \"redhat-operators-5pm2v\" (UID: \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\") " pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:18 crc 
kubenswrapper[4915]: I1124 21:54:18.163349 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxd2p\" (UniqueName: \"kubernetes.io/projected/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-kube-api-access-nxd2p\") pod \"redhat-operators-5pm2v\" (UID: \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\") " pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:18 crc kubenswrapper[4915]: I1124 21:54:18.459385 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:19 crc kubenswrapper[4915]: I1124 21:54:19.046965 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5pm2v"] Nov 24 21:54:19 crc kubenswrapper[4915]: I1124 21:54:19.740252 4915 generic.go:334] "Generic (PLEG): container finished" podID="bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" containerID="1c9a82bed4370cce22711fefc47c244b32642f24a819f0df3f19bb5ccc0bcf9c" exitCode=0 Nov 24 21:54:19 crc kubenswrapper[4915]: I1124 21:54:19.740325 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pm2v" event={"ID":"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9","Type":"ContainerDied","Data":"1c9a82bed4370cce22711fefc47c244b32642f24a819f0df3f19bb5ccc0bcf9c"} Nov 24 21:54:19 crc kubenswrapper[4915]: I1124 21:54:19.740487 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pm2v" event={"ID":"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9","Type":"ContainerStarted","Data":"90cd9aef353743475bc7156a4e7cf9b74496abc935bb3fa9e0f9442fed44015a"} Nov 24 21:54:20 crc kubenswrapper[4915]: I1124 21:54:20.756985 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pm2v" event={"ID":"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9","Type":"ContainerStarted","Data":"e0a59687e3e239412499563828462718d3001ae4f7c340f5b12d73bcee52f218"} Nov 24 21:54:22 crc kubenswrapper[4915]: I1124 
21:54:22.793814 4915 generic.go:334] "Generic (PLEG): container finished" podID="389bed1d-1ac1-470b-af86-8adb74146ef0" containerID="bed57ff53d40c39ed329a949feedb1f22cdaed5a06944f472873cdebf86dcda3" exitCode=0 Nov 24 21:54:22 crc kubenswrapper[4915]: I1124 21:54:22.793889 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" event={"ID":"389bed1d-1ac1-470b-af86-8adb74146ef0","Type":"ContainerDied","Data":"bed57ff53d40c39ed329a949feedb1f22cdaed5a06944f472873cdebf86dcda3"} Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.327061 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.327763 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.327863 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.328989 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"556aae8bbced33e3b49c2bf37d5c3e01d83bc99ece680acbe35b92b0990e5b5d"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.329065 
4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://556aae8bbced33e3b49c2bf37d5c3e01d83bc99ece680acbe35b92b0990e5b5d" gracePeriod=600 Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.386922 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.434212 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfckr\" (UniqueName: \"kubernetes.io/projected/389bed1d-1ac1-470b-af86-8adb74146ef0-kube-api-access-vfckr\") pod \"389bed1d-1ac1-470b-af86-8adb74146ef0\" (UID: \"389bed1d-1ac1-470b-af86-8adb74146ef0\") " Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.434336 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/389bed1d-1ac1-470b-af86-8adb74146ef0-inventory\") pod \"389bed1d-1ac1-470b-af86-8adb74146ef0\" (UID: \"389bed1d-1ac1-470b-af86-8adb74146ef0\") " Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.434480 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/389bed1d-1ac1-470b-af86-8adb74146ef0-ssh-key\") pod \"389bed1d-1ac1-470b-af86-8adb74146ef0\" (UID: \"389bed1d-1ac1-470b-af86-8adb74146ef0\") " Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.488111 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/389bed1d-1ac1-470b-af86-8adb74146ef0-kube-api-access-vfckr" (OuterVolumeSpecName: "kube-api-access-vfckr") pod "389bed1d-1ac1-470b-af86-8adb74146ef0" (UID: "389bed1d-1ac1-470b-af86-8adb74146ef0"). InnerVolumeSpecName "kube-api-access-vfckr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.537400 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfckr\" (UniqueName: \"kubernetes.io/projected/389bed1d-1ac1-470b-af86-8adb74146ef0-kube-api-access-vfckr\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.617025 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/389bed1d-1ac1-470b-af86-8adb74146ef0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "389bed1d-1ac1-470b-af86-8adb74146ef0" (UID: "389bed1d-1ac1-470b-af86-8adb74146ef0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.626930 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/389bed1d-1ac1-470b-af86-8adb74146ef0-inventory" (OuterVolumeSpecName: "inventory") pod "389bed1d-1ac1-470b-af86-8adb74146ef0" (UID: "389bed1d-1ac1-470b-af86-8adb74146ef0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.640849 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/389bed1d-1ac1-470b-af86-8adb74146ef0-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.640883 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/389bed1d-1ac1-470b-af86-8adb74146ef0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.822013 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="556aae8bbced33e3b49c2bf37d5c3e01d83bc99ece680acbe35b92b0990e5b5d" exitCode=0 Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.822059 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"556aae8bbced33e3b49c2bf37d5c3e01d83bc99ece680acbe35b92b0990e5b5d"} Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.822322 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388"} Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.822345 4915 scope.go:117] "RemoveContainer" containerID="e35b421db64e0c0a1d101af5d1cb0bc0580962fd3824f2e5b2c603b4d4ef9489" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.825321 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" 
event={"ID":"389bed1d-1ac1-470b-af86-8adb74146ef0","Type":"ContainerDied","Data":"a06ab1066d3a178c344a8dcc9d221159b5bcbfb938548da25ea0c23f363828bc"} Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.825356 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a06ab1066d3a178c344a8dcc9d221159b5bcbfb938548da25ea0c23f363828bc" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.825409 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-s9hmp" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.911902 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj"] Nov 24 21:54:24 crc kubenswrapper[4915]: E1124 21:54:24.912438 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="389bed1d-1ac1-470b-af86-8adb74146ef0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.912463 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="389bed1d-1ac1-470b-af86-8adb74146ef0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.912749 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="389bed1d-1ac1-470b-af86-8adb74146ef0" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.913643 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.919209 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.919765 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.919850 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.921586 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 21:54:24 crc kubenswrapper[4915]: I1124 21:54:24.927365 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj"] Nov 24 21:54:25 crc kubenswrapper[4915]: I1124 21:54:25.054206 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj\" (UID: \"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" Nov 24 21:54:25 crc kubenswrapper[4915]: I1124 21:54:25.054383 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dqvs\" (UniqueName: \"kubernetes.io/projected/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-kube-api-access-5dqvs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj\" (UID: \"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" Nov 24 21:54:25 crc kubenswrapper[4915]: I1124 21:54:25.054437 4915 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj\" (UID: \"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" Nov 24 21:54:25 crc kubenswrapper[4915]: I1124 21:54:25.157391 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dqvs\" (UniqueName: \"kubernetes.io/projected/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-kube-api-access-5dqvs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj\" (UID: \"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" Nov 24 21:54:25 crc kubenswrapper[4915]: I1124 21:54:25.157525 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj\" (UID: \"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" Nov 24 21:54:25 crc kubenswrapper[4915]: I1124 21:54:25.157653 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj\" (UID: \"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" Nov 24 21:54:25 crc kubenswrapper[4915]: I1124 21:54:25.163873 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj\" (UID: 
\"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" Nov 24 21:54:25 crc kubenswrapper[4915]: I1124 21:54:25.164297 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj\" (UID: \"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" Nov 24 21:54:25 crc kubenswrapper[4915]: I1124 21:54:25.176349 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dqvs\" (UniqueName: \"kubernetes.io/projected/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-kube-api-access-5dqvs\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj\" (UID: \"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" Nov 24 21:54:25 crc kubenswrapper[4915]: I1124 21:54:25.231396 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" Nov 24 21:54:25 crc kubenswrapper[4915]: I1124 21:54:25.817888 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj"] Nov 24 21:54:25 crc kubenswrapper[4915]: W1124 21:54:25.820528 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09f1f524_d962_4ff4_b8e4_d9e3ace2f492.slice/crio-9aa02c2ec463adae50081baee3fec79338190f0d21d86a00fcd0c837822f8d2a WatchSource:0}: Error finding container 9aa02c2ec463adae50081baee3fec79338190f0d21d86a00fcd0c837822f8d2a: Status 404 returned error can't find the container with id 9aa02c2ec463adae50081baee3fec79338190f0d21d86a00fcd0c837822f8d2a Nov 24 21:54:25 crc kubenswrapper[4915]: I1124 21:54:25.836753 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" event={"ID":"09f1f524-d962-4ff4-b8e4-d9e3ace2f492","Type":"ContainerStarted","Data":"9aa02c2ec463adae50081baee3fec79338190f0d21d86a00fcd0c837822f8d2a"} Nov 24 21:54:25 crc kubenswrapper[4915]: I1124 21:54:25.839422 4915 generic.go:334] "Generic (PLEG): container finished" podID="bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" containerID="e0a59687e3e239412499563828462718d3001ae4f7c340f5b12d73bcee52f218" exitCode=0 Nov 24 21:54:25 crc kubenswrapper[4915]: I1124 21:54:25.839474 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pm2v" event={"ID":"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9","Type":"ContainerDied","Data":"e0a59687e3e239412499563828462718d3001ae4f7c340f5b12d73bcee52f218"} Nov 24 21:54:26 crc kubenswrapper[4915]: I1124 21:54:26.053904 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-nzxkj"] Nov 24 21:54:26 crc kubenswrapper[4915]: I1124 21:54:26.065683 4915 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/aodh-405b-account-create-jjfvx"] Nov 24 21:54:26 crc kubenswrapper[4915]: I1124 21:54:26.076737 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-405b-account-create-jjfvx"] Nov 24 21:54:26 crc kubenswrapper[4915]: I1124 21:54:26.084990 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-nzxkj"] Nov 24 21:54:26 crc kubenswrapper[4915]: I1124 21:54:26.451294 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96df9840-ef15-4b5e-a23c-a7c0dbdf9d14" path="/var/lib/kubelet/pods/96df9840-ef15-4b5e-a23c-a7c0dbdf9d14/volumes" Nov 24 21:54:26 crc kubenswrapper[4915]: I1124 21:54:26.453369 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c94bae1f-7fd1-400e-b38f-6d835e52bb97" path="/var/lib/kubelet/pods/c94bae1f-7fd1-400e-b38f-6d835e52bb97/volumes" Nov 24 21:54:26 crc kubenswrapper[4915]: I1124 21:54:26.856388 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" event={"ID":"09f1f524-d962-4ff4-b8e4-d9e3ace2f492","Type":"ContainerStarted","Data":"bc0bdf3c3da9c5da7a391d1cd9146823a498af0aa427671e0042b261925afd6e"} Nov 24 21:54:26 crc kubenswrapper[4915]: I1124 21:54:26.859037 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pm2v" event={"ID":"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9","Type":"ContainerStarted","Data":"2782ef6e74342241387869e543bdd95aed57bf5a6098aa682d3cd5233eff0caa"} Nov 24 21:54:26 crc kubenswrapper[4915]: I1124 21:54:26.886113 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" podStartSLOduration=2.4115695 podStartE2EDuration="2.886078195s" podCreationTimestamp="2025-11-24 21:54:24 +0000 UTC" firstStartedPulling="2025-11-24 21:54:25.823193169 +0000 UTC m=+2084.139445342" lastFinishedPulling="2025-11-24 
21:54:26.297701824 +0000 UTC m=+2084.613954037" observedRunningTime="2025-11-24 21:54:26.876573409 +0000 UTC m=+2085.192825592" watchObservedRunningTime="2025-11-24 21:54:26.886078195 +0000 UTC m=+2085.202330368" Nov 24 21:54:26 crc kubenswrapper[4915]: I1124 21:54:26.900275 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5pm2v" podStartSLOduration=3.37983708 podStartE2EDuration="9.900251258s" podCreationTimestamp="2025-11-24 21:54:17 +0000 UTC" firstStartedPulling="2025-11-24 21:54:19.743303729 +0000 UTC m=+2078.059555942" lastFinishedPulling="2025-11-24 21:54:26.263717947 +0000 UTC m=+2084.579970120" observedRunningTime="2025-11-24 21:54:26.894917443 +0000 UTC m=+2085.211169616" watchObservedRunningTime="2025-11-24 21:54:26.900251258 +0000 UTC m=+2085.216503431" Nov 24 21:54:28 crc kubenswrapper[4915]: I1124 21:54:28.460199 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:28 crc kubenswrapper[4915]: I1124 21:54:28.460761 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:29 crc kubenswrapper[4915]: I1124 21:54:29.539404 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5pm2v" podUID="bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" containerName="registry-server" probeResult="failure" output=< Nov 24 21:54:29 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 21:54:29 crc kubenswrapper[4915]: > Nov 24 21:54:30 crc kubenswrapper[4915]: I1124 21:54:30.120766 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-298k5"] Nov 24 21:54:30 crc kubenswrapper[4915]: I1124 21:54:30.133188 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-298k5"] Nov 24 21:54:30 crc 
kubenswrapper[4915]: I1124 21:54:30.145840 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-jshft"] Nov 24 21:54:30 crc kubenswrapper[4915]: I1124 21:54:30.154405 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-jshft"] Nov 24 21:54:30 crc kubenswrapper[4915]: I1124 21:54:30.438237 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0b0bf9-6641-4a71-8adb-2c6e97412718" path="/var/lib/kubelet/pods/5e0b0bf9-6641-4a71-8adb-2c6e97412718/volumes" Nov 24 21:54:30 crc kubenswrapper[4915]: I1124 21:54:30.438896 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb3b004f-82a5-46ab-aff4-223567ddd793" path="/var/lib/kubelet/pods/fb3b004f-82a5-46ab-aff4-223567ddd793/volumes" Nov 24 21:54:38 crc kubenswrapper[4915]: I1124 21:54:38.521100 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:38 crc kubenswrapper[4915]: I1124 21:54:38.598408 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:38 crc kubenswrapper[4915]: I1124 21:54:38.769259 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5pm2v"] Nov 24 21:54:40 crc kubenswrapper[4915]: I1124 21:54:40.025400 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5pm2v" podUID="bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" containerName="registry-server" containerID="cri-o://2782ef6e74342241387869e543bdd95aed57bf5a6098aa682d3cd5233eff0caa" gracePeriod=2 Nov 24 21:54:40 crc kubenswrapper[4915]: I1124 21:54:40.609703 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:40 crc kubenswrapper[4915]: I1124 21:54:40.800417 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxd2p\" (UniqueName: \"kubernetes.io/projected/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-kube-api-access-nxd2p\") pod \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\" (UID: \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\") " Nov 24 21:54:40 crc kubenswrapper[4915]: I1124 21:54:40.800696 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-catalog-content\") pod \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\" (UID: \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\") " Nov 24 21:54:40 crc kubenswrapper[4915]: I1124 21:54:40.800724 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-utilities\") pod \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\" (UID: \"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9\") " Nov 24 21:54:40 crc kubenswrapper[4915]: I1124 21:54:40.801684 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-utilities" (OuterVolumeSpecName: "utilities") pod "bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" (UID: "bb0bd7a5-a0f6-4d28-9add-729d3a831fb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:54:40 crc kubenswrapper[4915]: I1124 21:54:40.811917 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-kube-api-access-nxd2p" (OuterVolumeSpecName: "kube-api-access-nxd2p") pod "bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" (UID: "bb0bd7a5-a0f6-4d28-9add-729d3a831fb9"). InnerVolumeSpecName "kube-api-access-nxd2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:54:40 crc kubenswrapper[4915]: I1124 21:54:40.897408 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" (UID: "bb0bd7a5-a0f6-4d28-9add-729d3a831fb9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:54:40 crc kubenswrapper[4915]: I1124 21:54:40.903267 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:40 crc kubenswrapper[4915]: I1124 21:54:40.903310 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:40 crc kubenswrapper[4915]: I1124 21:54:40.903321 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxd2p\" (UniqueName: \"kubernetes.io/projected/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9-kube-api-access-nxd2p\") on node \"crc\" DevicePath \"\"" Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.041799 4915 generic.go:334] "Generic (PLEG): container finished" podID="bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" containerID="2782ef6e74342241387869e543bdd95aed57bf5a6098aa682d3cd5233eff0caa" exitCode=0 Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.041852 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pm2v" event={"ID":"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9","Type":"ContainerDied","Data":"2782ef6e74342241387869e543bdd95aed57bf5a6098aa682d3cd5233eff0caa"} Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.041875 4915 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5pm2v" Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.041892 4915 scope.go:117] "RemoveContainer" containerID="2782ef6e74342241387869e543bdd95aed57bf5a6098aa682d3cd5233eff0caa" Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.041881 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5pm2v" event={"ID":"bb0bd7a5-a0f6-4d28-9add-729d3a831fb9","Type":"ContainerDied","Data":"90cd9aef353743475bc7156a4e7cf9b74496abc935bb3fa9e0f9442fed44015a"} Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.075099 4915 scope.go:117] "RemoveContainer" containerID="e0a59687e3e239412499563828462718d3001ae4f7c340f5b12d73bcee52f218" Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.080861 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5pm2v"] Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.090463 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5pm2v"] Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.104209 4915 scope.go:117] "RemoveContainer" containerID="1c9a82bed4370cce22711fefc47c244b32642f24a819f0df3f19bb5ccc0bcf9c" Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.205956 4915 scope.go:117] "RemoveContainer" containerID="2782ef6e74342241387869e543bdd95aed57bf5a6098aa682d3cd5233eff0caa" Nov 24 21:54:41 crc kubenswrapper[4915]: E1124 21:54:41.210921 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2782ef6e74342241387869e543bdd95aed57bf5a6098aa682d3cd5233eff0caa\": container with ID starting with 2782ef6e74342241387869e543bdd95aed57bf5a6098aa682d3cd5233eff0caa not found: ID does not exist" containerID="2782ef6e74342241387869e543bdd95aed57bf5a6098aa682d3cd5233eff0caa" Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.210975 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2782ef6e74342241387869e543bdd95aed57bf5a6098aa682d3cd5233eff0caa"} err="failed to get container status \"2782ef6e74342241387869e543bdd95aed57bf5a6098aa682d3cd5233eff0caa\": rpc error: code = NotFound desc = could not find container \"2782ef6e74342241387869e543bdd95aed57bf5a6098aa682d3cd5233eff0caa\": container with ID starting with 2782ef6e74342241387869e543bdd95aed57bf5a6098aa682d3cd5233eff0caa not found: ID does not exist" Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.211005 4915 scope.go:117] "RemoveContainer" containerID="e0a59687e3e239412499563828462718d3001ae4f7c340f5b12d73bcee52f218" Nov 24 21:54:41 crc kubenswrapper[4915]: E1124 21:54:41.213245 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0a59687e3e239412499563828462718d3001ae4f7c340f5b12d73bcee52f218\": container with ID starting with e0a59687e3e239412499563828462718d3001ae4f7c340f5b12d73bcee52f218 not found: ID does not exist" containerID="e0a59687e3e239412499563828462718d3001ae4f7c340f5b12d73bcee52f218" Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.213295 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a59687e3e239412499563828462718d3001ae4f7c340f5b12d73bcee52f218"} err="failed to get container status \"e0a59687e3e239412499563828462718d3001ae4f7c340f5b12d73bcee52f218\": rpc error: code = NotFound desc = could not find container \"e0a59687e3e239412499563828462718d3001ae4f7c340f5b12d73bcee52f218\": container with ID starting with e0a59687e3e239412499563828462718d3001ae4f7c340f5b12d73bcee52f218 not found: ID does not exist" Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.213329 4915 scope.go:117] "RemoveContainer" containerID="1c9a82bed4370cce22711fefc47c244b32642f24a819f0df3f19bb5ccc0bcf9c" Nov 24 21:54:41 crc kubenswrapper[4915]: E1124 
21:54:41.213676 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c9a82bed4370cce22711fefc47c244b32642f24a819f0df3f19bb5ccc0bcf9c\": container with ID starting with 1c9a82bed4370cce22711fefc47c244b32642f24a819f0df3f19bb5ccc0bcf9c not found: ID does not exist" containerID="1c9a82bed4370cce22711fefc47c244b32642f24a819f0df3f19bb5ccc0bcf9c" Nov 24 21:54:41 crc kubenswrapper[4915]: I1124 21:54:41.213698 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9a82bed4370cce22711fefc47c244b32642f24a819f0df3f19bb5ccc0bcf9c"} err="failed to get container status \"1c9a82bed4370cce22711fefc47c244b32642f24a819f0df3f19bb5ccc0bcf9c\": rpc error: code = NotFound desc = could not find container \"1c9a82bed4370cce22711fefc47c244b32642f24a819f0df3f19bb5ccc0bcf9c\": container with ID starting with 1c9a82bed4370cce22711fefc47c244b32642f24a819f0df3f19bb5ccc0bcf9c not found: ID does not exist" Nov 24 21:54:42 crc kubenswrapper[4915]: I1124 21:54:42.447340 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" path="/var/lib/kubelet/pods/bb0bd7a5-a0f6-4d28-9add-729d3a831fb9/volumes" Nov 24 21:54:43 crc kubenswrapper[4915]: I1124 21:54:43.032990 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-cxqp9"] Nov 24 21:54:43 crc kubenswrapper[4915]: I1124 21:54:43.048342 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-cxqp9"] Nov 24 21:54:44 crc kubenswrapper[4915]: I1124 21:54:44.448352 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6" path="/var/lib/kubelet/pods/6c9caff7-bafc-4a66-8f60-fc3dc1f04ce6/volumes" Nov 24 21:55:14 crc kubenswrapper[4915]: I1124 21:55:14.060300 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-8gn7w"] Nov 24 21:55:14 
crc kubenswrapper[4915]: I1124 21:55:14.071920 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-8gn7w"] Nov 24 21:55:14 crc kubenswrapper[4915]: I1124 21:55:14.450866 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2" path="/var/lib/kubelet/pods/1c7f3ade-fe1b-412a-8051-f4d7a9d8fbd2/volumes" Nov 24 21:55:16 crc kubenswrapper[4915]: I1124 21:55:16.602187 4915 scope.go:117] "RemoveContainer" containerID="7c04e6f7357657b30eb6eeae432165a3498378c803732965f225a4a23419eaac" Nov 24 21:55:16 crc kubenswrapper[4915]: I1124 21:55:16.646278 4915 scope.go:117] "RemoveContainer" containerID="7b3a95eafda5ef5aa58648bc3ebdfaf7e7e1a3145b21869374f404dce05bc8cc" Nov 24 21:55:16 crc kubenswrapper[4915]: I1124 21:55:16.761754 4915 scope.go:117] "RemoveContainer" containerID="a520360b07c830366aeddd990b071210934be56a39c28163798c5769febfd9c5" Nov 24 21:55:16 crc kubenswrapper[4915]: I1124 21:55:16.802447 4915 scope.go:117] "RemoveContainer" containerID="3235d4b0da129d58e102fd39dc35374f450ebe1a35b11865ee563b25f55364ee" Nov 24 21:55:16 crc kubenswrapper[4915]: I1124 21:55:16.896596 4915 scope.go:117] "RemoveContainer" containerID="6ecf36413b1fe9a95006d4a9c1d62fd142b7c0d6e4553b2e3485b1a2f446094f" Nov 24 21:55:16 crc kubenswrapper[4915]: I1124 21:55:16.945967 4915 scope.go:117] "RemoveContainer" containerID="8851b4319d1ff78ba7dbe3b800e3d65ae76886d09a7c90672b009bf9fbbb94e2" Nov 24 21:55:21 crc kubenswrapper[4915]: I1124 21:55:21.556975 4915 generic.go:334] "Generic (PLEG): container finished" podID="09f1f524-d962-4ff4-b8e4-d9e3ace2f492" containerID="bc0bdf3c3da9c5da7a391d1cd9146823a498af0aa427671e0042b261925afd6e" exitCode=0 Nov 24 21:55:21 crc kubenswrapper[4915]: I1124 21:55:21.557044 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" 
event={"ID":"09f1f524-d962-4ff4-b8e4-d9e3ace2f492","Type":"ContainerDied","Data":"bc0bdf3c3da9c5da7a391d1cd9146823a498af0aa427671e0042b261925afd6e"} Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.053582 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.183873 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-inventory\") pod \"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\" (UID: \"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\") " Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.183957 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-ssh-key\") pod \"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\" (UID: \"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\") " Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.184262 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dqvs\" (UniqueName: \"kubernetes.io/projected/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-kube-api-access-5dqvs\") pod \"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\" (UID: \"09f1f524-d962-4ff4-b8e4-d9e3ace2f492\") " Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.191725 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-kube-api-access-5dqvs" (OuterVolumeSpecName: "kube-api-access-5dqvs") pod "09f1f524-d962-4ff4-b8e4-d9e3ace2f492" (UID: "09f1f524-d962-4ff4-b8e4-d9e3ace2f492"). InnerVolumeSpecName "kube-api-access-5dqvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.225423 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "09f1f524-d962-4ff4-b8e4-d9e3ace2f492" (UID: "09f1f524-d962-4ff4-b8e4-d9e3ace2f492"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.287300 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dqvs\" (UniqueName: \"kubernetes.io/projected/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-kube-api-access-5dqvs\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.287346 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.341560 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-inventory" (OuterVolumeSpecName: "inventory") pod "09f1f524-d962-4ff4-b8e4-d9e3ace2f492" (UID: "09f1f524-d962-4ff4-b8e4-d9e3ace2f492"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.389706 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/09f1f524-d962-4ff4-b8e4-d9e3ace2f492-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.585855 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" event={"ID":"09f1f524-d962-4ff4-b8e4-d9e3ace2f492","Type":"ContainerDied","Data":"9aa02c2ec463adae50081baee3fec79338190f0d21d86a00fcd0c837822f8d2a"} Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.586248 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aa02c2ec463adae50081baee3fec79338190f0d21d86a00fcd0c837822f8d2a" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.585923 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.674762 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ggpl4"] Nov 24 21:55:23 crc kubenswrapper[4915]: E1124 21:55:23.675424 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" containerName="registry-server" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.675444 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" containerName="registry-server" Nov 24 21:55:23 crc kubenswrapper[4915]: E1124 21:55:23.675460 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" containerName="extract-utilities" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.675467 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" containerName="extract-utilities" Nov 24 21:55:23 crc kubenswrapper[4915]: E1124 21:55:23.675509 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" containerName="extract-content" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.675515 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" containerName="extract-content" Nov 24 21:55:23 crc kubenswrapper[4915]: E1124 21:55:23.675529 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f1f524-d962-4ff4-b8e4-d9e3ace2f492" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.675539 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f1f524-d962-4ff4-b8e4-d9e3ace2f492" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.675800 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f1f524-d962-4ff4-b8e4-d9e3ace2f492" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.675819 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0bd7a5-a0f6-4d28-9add-729d3a831fb9" containerName="registry-server" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.676603 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.678382 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.678898 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.679013 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.680859 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.689678 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ggpl4"] Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.800482 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c2d7b8c5-027d-4ea1-8eed-33866c899a66-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ggpl4\" (UID: \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\") " pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.800634 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9tjn\" (UniqueName: \"kubernetes.io/projected/c2d7b8c5-027d-4ea1-8eed-33866c899a66-kube-api-access-v9tjn\") pod \"ssh-known-hosts-edpm-deployment-ggpl4\" (UID: \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\") " pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.800668 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2d7b8c5-027d-4ea1-8eed-33866c899a66-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ggpl4\" (UID: \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\") " pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.907036 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9tjn\" (UniqueName: \"kubernetes.io/projected/c2d7b8c5-027d-4ea1-8eed-33866c899a66-kube-api-access-v9tjn\") pod \"ssh-known-hosts-edpm-deployment-ggpl4\" (UID: \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\") " pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.907854 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2d7b8c5-027d-4ea1-8eed-33866c899a66-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ggpl4\" (UID: \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\") " pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.909895 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c2d7b8c5-027d-4ea1-8eed-33866c899a66-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ggpl4\" (UID: \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\") " pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.913263 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2d7b8c5-027d-4ea1-8eed-33866c899a66-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-ggpl4\" (UID: \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\") " pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" Nov 24 21:55:23 crc kubenswrapper[4915]: 
I1124 21:55:23.915131 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c2d7b8c5-027d-4ea1-8eed-33866c899a66-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-ggpl4\" (UID: \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\") " pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.923650 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9tjn\" (UniqueName: \"kubernetes.io/projected/c2d7b8c5-027d-4ea1-8eed-33866c899a66-kube-api-access-v9tjn\") pod \"ssh-known-hosts-edpm-deployment-ggpl4\" (UID: \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\") " pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" Nov 24 21:55:23 crc kubenswrapper[4915]: I1124 21:55:23.997625 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" Nov 24 21:55:24 crc kubenswrapper[4915]: I1124 21:55:24.600515 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-ggpl4"] Nov 24 21:55:25 crc kubenswrapper[4915]: I1124 21:55:25.635710 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" event={"ID":"c2d7b8c5-027d-4ea1-8eed-33866c899a66","Type":"ContainerStarted","Data":"f6ecd66acbce012a954895e4237cd6c804195226b59d24cef53641857716454a"} Nov 24 21:55:25 crc kubenswrapper[4915]: I1124 21:55:25.636107 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" event={"ID":"c2d7b8c5-027d-4ea1-8eed-33866c899a66","Type":"ContainerStarted","Data":"bac3b6282e9b8db8a79b7ae05d5395f69f1186723e2baff311caf1ba4200614d"} Nov 24 21:55:25 crc kubenswrapper[4915]: I1124 21:55:25.660210 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" podStartSLOduration=2.020746176 
podStartE2EDuration="2.660190586s" podCreationTimestamp="2025-11-24 21:55:23 +0000 UTC" firstStartedPulling="2025-11-24 21:55:24.618568365 +0000 UTC m=+2142.934820538" lastFinishedPulling="2025-11-24 21:55:25.258012755 +0000 UTC m=+2143.574264948" observedRunningTime="2025-11-24 21:55:25.647264118 +0000 UTC m=+2143.963516291" watchObservedRunningTime="2025-11-24 21:55:25.660190586 +0000 UTC m=+2143.976442759" Nov 24 21:55:33 crc kubenswrapper[4915]: I1124 21:55:33.723173 4915 generic.go:334] "Generic (PLEG): container finished" podID="c2d7b8c5-027d-4ea1-8eed-33866c899a66" containerID="f6ecd66acbce012a954895e4237cd6c804195226b59d24cef53641857716454a" exitCode=0 Nov 24 21:55:33 crc kubenswrapper[4915]: I1124 21:55:33.723256 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" event={"ID":"c2d7b8c5-027d-4ea1-8eed-33866c899a66","Type":"ContainerDied","Data":"f6ecd66acbce012a954895e4237cd6c804195226b59d24cef53641857716454a"} Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.260239 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.417242 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2d7b8c5-027d-4ea1-8eed-33866c899a66-ssh-key-openstack-edpm-ipam\") pod \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\" (UID: \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\") " Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.417333 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9tjn\" (UniqueName: \"kubernetes.io/projected/c2d7b8c5-027d-4ea1-8eed-33866c899a66-kube-api-access-v9tjn\") pod \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\" (UID: \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\") " Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.417402 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c2d7b8c5-027d-4ea1-8eed-33866c899a66-inventory-0\") pod \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\" (UID: \"c2d7b8c5-027d-4ea1-8eed-33866c899a66\") " Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.432139 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d7b8c5-027d-4ea1-8eed-33866c899a66-kube-api-access-v9tjn" (OuterVolumeSpecName: "kube-api-access-v9tjn") pod "c2d7b8c5-027d-4ea1-8eed-33866c899a66" (UID: "c2d7b8c5-027d-4ea1-8eed-33866c899a66"). InnerVolumeSpecName "kube-api-access-v9tjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.452129 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d7b8c5-027d-4ea1-8eed-33866c899a66-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "c2d7b8c5-027d-4ea1-8eed-33866c899a66" (UID: "c2d7b8c5-027d-4ea1-8eed-33866c899a66"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.468380 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d7b8c5-027d-4ea1-8eed-33866c899a66-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c2d7b8c5-027d-4ea1-8eed-33866c899a66" (UID: "c2d7b8c5-027d-4ea1-8eed-33866c899a66"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.519962 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c2d7b8c5-027d-4ea1-8eed-33866c899a66-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.520000 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9tjn\" (UniqueName: \"kubernetes.io/projected/c2d7b8c5-027d-4ea1-8eed-33866c899a66-kube-api-access-v9tjn\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.520012 4915 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/c2d7b8c5-027d-4ea1-8eed-33866c899a66-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.775074 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" event={"ID":"c2d7b8c5-027d-4ea1-8eed-33866c899a66","Type":"ContainerDied","Data":"bac3b6282e9b8db8a79b7ae05d5395f69f1186723e2baff311caf1ba4200614d"} Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.775128 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bac3b6282e9b8db8a79b7ae05d5395f69f1186723e2baff311caf1ba4200614d" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.775222 
4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-ggpl4" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.853763 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5"] Nov 24 21:55:35 crc kubenswrapper[4915]: E1124 21:55:35.854760 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d7b8c5-027d-4ea1-8eed-33866c899a66" containerName="ssh-known-hosts-edpm-deployment" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.854796 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d7b8c5-027d-4ea1-8eed-33866c899a66" containerName="ssh-known-hosts-edpm-deployment" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.855113 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d7b8c5-027d-4ea1-8eed-33866c899a66" containerName="ssh-known-hosts-edpm-deployment" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.856189 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.858809 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.859001 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.859295 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.866032 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:55:35 crc kubenswrapper[4915]: I1124 21:55:35.880944 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5"] Nov 24 21:55:36 crc kubenswrapper[4915]: I1124 21:55:36.031915 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hmdf\" (UniqueName: \"kubernetes.io/projected/16282458-e621-4ced-9063-b7106c2fbd91-kube-api-access-2hmdf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4nrw5\" (UID: \"16282458-e621-4ced-9063-b7106c2fbd91\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" Nov 24 21:55:36 crc kubenswrapper[4915]: I1124 21:55:36.032387 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16282458-e621-4ced-9063-b7106c2fbd91-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4nrw5\" (UID: \"16282458-e621-4ced-9063-b7106c2fbd91\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" Nov 24 21:55:36 crc kubenswrapper[4915]: I1124 21:55:36.032485 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16282458-e621-4ced-9063-b7106c2fbd91-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4nrw5\" (UID: \"16282458-e621-4ced-9063-b7106c2fbd91\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" Nov 24 21:55:36 crc kubenswrapper[4915]: I1124 21:55:36.134601 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16282458-e621-4ced-9063-b7106c2fbd91-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4nrw5\" (UID: \"16282458-e621-4ced-9063-b7106c2fbd91\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" Nov 24 21:55:36 crc kubenswrapper[4915]: I1124 21:55:36.134846 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16282458-e621-4ced-9063-b7106c2fbd91-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4nrw5\" (UID: \"16282458-e621-4ced-9063-b7106c2fbd91\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" Nov 24 21:55:36 crc kubenswrapper[4915]: I1124 21:55:36.134997 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hmdf\" (UniqueName: \"kubernetes.io/projected/16282458-e621-4ced-9063-b7106c2fbd91-kube-api-access-2hmdf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4nrw5\" (UID: \"16282458-e621-4ced-9063-b7106c2fbd91\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" Nov 24 21:55:36 crc kubenswrapper[4915]: I1124 21:55:36.138620 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16282458-e621-4ced-9063-b7106c2fbd91-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4nrw5\" (UID: \"16282458-e621-4ced-9063-b7106c2fbd91\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" Nov 24 21:55:36 crc kubenswrapper[4915]: I1124 21:55:36.138866 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16282458-e621-4ced-9063-b7106c2fbd91-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4nrw5\" (UID: \"16282458-e621-4ced-9063-b7106c2fbd91\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" Nov 24 21:55:36 crc kubenswrapper[4915]: I1124 21:55:36.152691 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hmdf\" (UniqueName: \"kubernetes.io/projected/16282458-e621-4ced-9063-b7106c2fbd91-kube-api-access-2hmdf\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4nrw5\" (UID: \"16282458-e621-4ced-9063-b7106c2fbd91\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" Nov 24 21:55:36 crc kubenswrapper[4915]: I1124 21:55:36.174062 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" Nov 24 21:55:36 crc kubenswrapper[4915]: W1124 21:55:36.776513 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16282458_e621_4ced_9063_b7106c2fbd91.slice/crio-3f0913b09a99ce499fa138eba1e9b89da6f9ab1caf93f852dad04f3743550ed6 WatchSource:0}: Error finding container 3f0913b09a99ce499fa138eba1e9b89da6f9ab1caf93f852dad04f3743550ed6: Status 404 returned error can't find the container with id 3f0913b09a99ce499fa138eba1e9b89da6f9ab1caf93f852dad04f3743550ed6 Nov 24 21:55:36 crc kubenswrapper[4915]: I1124 21:55:36.785039 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5"] Nov 24 21:55:37 crc kubenswrapper[4915]: I1124 21:55:37.803919 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" event={"ID":"16282458-e621-4ced-9063-b7106c2fbd91","Type":"ContainerStarted","Data":"ffb6c0bcc3849cb33f5c14cd65bf1ce0cdd248881bbc429707c018d8adbecbee"} Nov 24 21:55:37 crc kubenswrapper[4915]: I1124 21:55:37.804389 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" event={"ID":"16282458-e621-4ced-9063-b7106c2fbd91","Type":"ContainerStarted","Data":"3f0913b09a99ce499fa138eba1e9b89da6f9ab1caf93f852dad04f3743550ed6"} Nov 24 21:55:37 crc kubenswrapper[4915]: I1124 21:55:37.856481 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" podStartSLOduration=2.369058284 podStartE2EDuration="2.856441997s" podCreationTimestamp="2025-11-24 21:55:35 +0000 UTC" firstStartedPulling="2025-11-24 21:55:36.778690799 +0000 UTC m=+2155.094942972" lastFinishedPulling="2025-11-24 21:55:37.266074492 +0000 UTC m=+2155.582326685" observedRunningTime="2025-11-24 
21:55:37.847968828 +0000 UTC m=+2156.164221021" watchObservedRunningTime="2025-11-24 21:55:37.856441997 +0000 UTC m=+2156.172694170" Nov 24 21:55:46 crc kubenswrapper[4915]: I1124 21:55:46.912381 4915 generic.go:334] "Generic (PLEG): container finished" podID="16282458-e621-4ced-9063-b7106c2fbd91" containerID="ffb6c0bcc3849cb33f5c14cd65bf1ce0cdd248881bbc429707c018d8adbecbee" exitCode=0 Nov 24 21:55:46 crc kubenswrapper[4915]: I1124 21:55:46.912492 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" event={"ID":"16282458-e621-4ced-9063-b7106c2fbd91","Type":"ContainerDied","Data":"ffb6c0bcc3849cb33f5c14cd65bf1ce0cdd248881bbc429707c018d8adbecbee"} Nov 24 21:55:48 crc kubenswrapper[4915]: I1124 21:55:48.396635 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" Nov 24 21:55:48 crc kubenswrapper[4915]: I1124 21:55:48.558687 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16282458-e621-4ced-9063-b7106c2fbd91-ssh-key\") pod \"16282458-e621-4ced-9063-b7106c2fbd91\" (UID: \"16282458-e621-4ced-9063-b7106c2fbd91\") " Nov 24 21:55:48 crc kubenswrapper[4915]: I1124 21:55:48.559952 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hmdf\" (UniqueName: \"kubernetes.io/projected/16282458-e621-4ced-9063-b7106c2fbd91-kube-api-access-2hmdf\") pod \"16282458-e621-4ced-9063-b7106c2fbd91\" (UID: \"16282458-e621-4ced-9063-b7106c2fbd91\") " Nov 24 21:55:48 crc kubenswrapper[4915]: I1124 21:55:48.560363 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16282458-e621-4ced-9063-b7106c2fbd91-inventory\") pod \"16282458-e621-4ced-9063-b7106c2fbd91\" (UID: \"16282458-e621-4ced-9063-b7106c2fbd91\") " Nov 24 21:55:48 crc 
kubenswrapper[4915]: I1124 21:55:48.578078 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16282458-e621-4ced-9063-b7106c2fbd91-kube-api-access-2hmdf" (OuterVolumeSpecName: "kube-api-access-2hmdf") pod "16282458-e621-4ced-9063-b7106c2fbd91" (UID: "16282458-e621-4ced-9063-b7106c2fbd91"). InnerVolumeSpecName "kube-api-access-2hmdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:55:48 crc kubenswrapper[4915]: I1124 21:55:48.664690 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hmdf\" (UniqueName: \"kubernetes.io/projected/16282458-e621-4ced-9063-b7106c2fbd91-kube-api-access-2hmdf\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:48 crc kubenswrapper[4915]: I1124 21:55:48.665118 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16282458-e621-4ced-9063-b7106c2fbd91-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "16282458-e621-4ced-9063-b7106c2fbd91" (UID: "16282458-e621-4ced-9063-b7106c2fbd91"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:48 crc kubenswrapper[4915]: I1124 21:55:48.683416 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16282458-e621-4ced-9063-b7106c2fbd91-inventory" (OuterVolumeSpecName: "inventory") pod "16282458-e621-4ced-9063-b7106c2fbd91" (UID: "16282458-e621-4ced-9063-b7106c2fbd91"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:55:48 crc kubenswrapper[4915]: I1124 21:55:48.766749 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/16282458-e621-4ced-9063-b7106c2fbd91-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:48 crc kubenswrapper[4915]: I1124 21:55:48.766794 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/16282458-e621-4ced-9063-b7106c2fbd91-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:55:48 crc kubenswrapper[4915]: I1124 21:55:48.935841 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" event={"ID":"16282458-e621-4ced-9063-b7106c2fbd91","Type":"ContainerDied","Data":"3f0913b09a99ce499fa138eba1e9b89da6f9ab1caf93f852dad04f3743550ed6"} Nov 24 21:55:48 crc kubenswrapper[4915]: I1124 21:55:48.935887 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f0913b09a99ce499fa138eba1e9b89da6f9ab1caf93f852dad04f3743550ed6" Nov 24 21:55:48 crc kubenswrapper[4915]: I1124 21:55:48.935921 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4nrw5" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.039601 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm"] Nov 24 21:55:49 crc kubenswrapper[4915]: E1124 21:55:49.040183 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16282458-e621-4ced-9063-b7106c2fbd91" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.040203 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="16282458-e621-4ced-9063-b7106c2fbd91" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.040455 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="16282458-e621-4ced-9063-b7106c2fbd91" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.041289 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.044055 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.044467 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.044489 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.045531 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.059890 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm"] Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.175968 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm\" (UID: \"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.176345 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm\" (UID: \"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.176440 4915 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrnm8\" (UniqueName: \"kubernetes.io/projected/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-kube-api-access-qrnm8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm\" (UID: \"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.278538 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm\" (UID: \"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.278601 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm\" (UID: \"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.278690 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrnm8\" (UniqueName: \"kubernetes.io/projected/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-kube-api-access-qrnm8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm\" (UID: \"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.283029 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm\" (UID: 
\"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.283597 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm\" (UID: \"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.298133 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrnm8\" (UniqueName: \"kubernetes.io/projected/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-kube-api-access-qrnm8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm\" (UID: \"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" Nov 24 21:55:49 crc kubenswrapper[4915]: I1124 21:55:49.360119 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" Nov 24 21:55:50 crc kubenswrapper[4915]: I1124 21:55:50.003489 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm"] Nov 24 21:55:50 crc kubenswrapper[4915]: W1124 21:55:50.008637 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a81a419_2ab6_4e5b_9f8f_91dee9db382c.slice/crio-61bb63cd089ac12d1a2e09355c54cdacb7a43d3f9bdea49057a315cc7e9f2d1e WatchSource:0}: Error finding container 61bb63cd089ac12d1a2e09355c54cdacb7a43d3f9bdea49057a315cc7e9f2d1e: Status 404 returned error can't find the container with id 61bb63cd089ac12d1a2e09355c54cdacb7a43d3f9bdea49057a315cc7e9f2d1e Nov 24 21:55:50 crc kubenswrapper[4915]: I1124 21:55:50.961757 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" event={"ID":"0a81a419-2ab6-4e5b-9f8f-91dee9db382c","Type":"ContainerStarted","Data":"e08e689f916e654ffff82d7398457db962aed2683bc486d1eb7a8525265473ea"} Nov 24 21:55:50 crc kubenswrapper[4915]: I1124 21:55:50.962332 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" event={"ID":"0a81a419-2ab6-4e5b-9f8f-91dee9db382c","Type":"ContainerStarted","Data":"61bb63cd089ac12d1a2e09355c54cdacb7a43d3f9bdea49057a315cc7e9f2d1e"} Nov 24 21:55:50 crc kubenswrapper[4915]: I1124 21:55:50.979396 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" podStartSLOduration=1.529266247 podStartE2EDuration="1.979376413s" podCreationTimestamp="2025-11-24 21:55:49 +0000 UTC" firstStartedPulling="2025-11-24 21:55:50.012945353 +0000 UTC m=+2168.329197536" lastFinishedPulling="2025-11-24 21:55:50.463055519 +0000 UTC m=+2168.779307702" 
observedRunningTime="2025-11-24 21:55:50.976491715 +0000 UTC m=+2169.292743928" watchObservedRunningTime="2025-11-24 21:55:50.979376413 +0000 UTC m=+2169.295628596" Nov 24 21:56:01 crc kubenswrapper[4915]: I1124 21:56:01.071770 4915 generic.go:334] "Generic (PLEG): container finished" podID="0a81a419-2ab6-4e5b-9f8f-91dee9db382c" containerID="e08e689f916e654ffff82d7398457db962aed2683bc486d1eb7a8525265473ea" exitCode=0 Nov 24 21:56:01 crc kubenswrapper[4915]: I1124 21:56:01.071850 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" event={"ID":"0a81a419-2ab6-4e5b-9f8f-91dee9db382c","Type":"ContainerDied","Data":"e08e689f916e654ffff82d7398457db962aed2683bc486d1eb7a8525265473ea"} Nov 24 21:56:02 crc kubenswrapper[4915]: I1124 21:56:02.676122 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" Nov 24 21:56:02 crc kubenswrapper[4915]: I1124 21:56:02.834607 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-inventory\") pod \"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\" (UID: \"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\") " Nov 24 21:56:02 crc kubenswrapper[4915]: I1124 21:56:02.834741 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-ssh-key\") pod \"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\" (UID: \"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\") " Nov 24 21:56:02 crc kubenswrapper[4915]: I1124 21:56:02.834856 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrnm8\" (UniqueName: \"kubernetes.io/projected/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-kube-api-access-qrnm8\") pod \"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\" (UID: 
\"0a81a419-2ab6-4e5b-9f8f-91dee9db382c\") " Nov 24 21:56:02 crc kubenswrapper[4915]: I1124 21:56:02.879069 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-kube-api-access-qrnm8" (OuterVolumeSpecName: "kube-api-access-qrnm8") pod "0a81a419-2ab6-4e5b-9f8f-91dee9db382c" (UID: "0a81a419-2ab6-4e5b-9f8f-91dee9db382c"). InnerVolumeSpecName "kube-api-access-qrnm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:02 crc kubenswrapper[4915]: I1124 21:56:02.904987 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-inventory" (OuterVolumeSpecName: "inventory") pod "0a81a419-2ab6-4e5b-9f8f-91dee9db382c" (UID: "0a81a419-2ab6-4e5b-9f8f-91dee9db382c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:02 crc kubenswrapper[4915]: I1124 21:56:02.938312 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:02 crc kubenswrapper[4915]: I1124 21:56:02.938344 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrnm8\" (UniqueName: \"kubernetes.io/projected/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-kube-api-access-qrnm8\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:02 crc kubenswrapper[4915]: I1124 21:56:02.944978 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0a81a419-2ab6-4e5b-9f8f-91dee9db382c" (UID: "0a81a419-2ab6-4e5b-9f8f-91dee9db382c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.039929 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0a81a419-2ab6-4e5b-9f8f-91dee9db382c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.096031 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" event={"ID":"0a81a419-2ab6-4e5b-9f8f-91dee9db382c","Type":"ContainerDied","Data":"61bb63cd089ac12d1a2e09355c54cdacb7a43d3f9bdea49057a315cc7e9f2d1e"} Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.096074 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61bb63cd089ac12d1a2e09355c54cdacb7a43d3f9bdea49057a315cc7e9f2d1e" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.096185 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.247113 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss"] Nov 24 21:56:03 crc kubenswrapper[4915]: E1124 21:56:03.257070 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a81a419-2ab6-4e5b-9f8f-91dee9db382c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.257110 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a81a419-2ab6-4e5b-9f8f-91dee9db382c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.257654 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a81a419-2ab6-4e5b-9f8f-91dee9db382c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 
21:56:03.258915 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.261424 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.261597 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.261681 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.262189 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.262419 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.262602 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.262889 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.267588 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss"] Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.270394 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.270493 4915 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"openstack-aee-default-env" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.355039 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.355302 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.355396 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.355520 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 
21:56:03.355616 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.355710 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.355841 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.355961 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc 
kubenswrapper[4915]: I1124 21:56:03.356045 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.356125 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.356214 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc2pv\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-kube-api-access-sc2pv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.356244 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" 
Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.356563 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.356747 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.357132 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.357227 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.459840 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc2pv\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-kube-api-access-sc2pv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.459894 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.460013 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.460072 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc 
kubenswrapper[4915]: I1124 21:56:03.460137 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.460666 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.460721 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.460752 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.460855 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.460941 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.460998 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.461058 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.461095 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.461148 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.461195 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.461277 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.464050 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.464660 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.464673 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.466278 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.470086 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.471497 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.471896 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.472146 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.472190 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.473724 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.474317 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.474900 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.477446 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc 
kubenswrapper[4915]: I1124 21:56:03.477704 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.480077 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.481107 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc2pv\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-kube-api-access-sc2pv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-nfdss\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:03 crc kubenswrapper[4915]: I1124 21:56:03.582802 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:04 crc kubenswrapper[4915]: I1124 21:56:04.161432 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss"] Nov 24 21:56:05 crc kubenswrapper[4915]: I1124 21:56:05.128703 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" event={"ID":"96495372-42d5-4a61-99d1-be56b844f795","Type":"ContainerStarted","Data":"21f1a43dd12d41cb884dafc1411e22f1889357c50f015f2322bb6208587cca2b"} Nov 24 21:56:05 crc kubenswrapper[4915]: I1124 21:56:05.129273 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" event={"ID":"96495372-42d5-4a61-99d1-be56b844f795","Type":"ContainerStarted","Data":"e99c730a49d0f7a80eba6fd02de929c46b2344eb5e463547a1d781d8b8d8eb91"} Nov 24 21:56:05 crc kubenswrapper[4915]: I1124 21:56:05.174187 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" podStartSLOduration=1.689982463 podStartE2EDuration="2.174133199s" podCreationTimestamp="2025-11-24 21:56:03 +0000 UTC" firstStartedPulling="2025-11-24 21:56:04.16965552 +0000 UTC m=+2182.485907713" lastFinishedPulling="2025-11-24 21:56:04.653806256 +0000 UTC m=+2182.970058449" observedRunningTime="2025-11-24 21:56:05.162700919 +0000 UTC m=+2183.478953102" watchObservedRunningTime="2025-11-24 21:56:05.174133199 +0000 UTC m=+2183.490385372" Nov 24 21:56:24 crc kubenswrapper[4915]: I1124 21:56:24.326847 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:56:24 crc 
kubenswrapper[4915]: I1124 21:56:24.327375 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:56:38 crc kubenswrapper[4915]: I1124 21:56:38.545977 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8dxc2"] Nov 24 21:56:38 crc kubenswrapper[4915]: I1124 21:56:38.550658 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:38 crc kubenswrapper[4915]: I1124 21:56:38.564244 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dxc2"] Nov 24 21:56:38 crc kubenswrapper[4915]: I1124 21:56:38.726308 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfk48\" (UniqueName: \"kubernetes.io/projected/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-kube-api-access-nfk48\") pod \"redhat-marketplace-8dxc2\" (UID: \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\") " pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:38 crc kubenswrapper[4915]: I1124 21:56:38.726579 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-utilities\") pod \"redhat-marketplace-8dxc2\" (UID: \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\") " pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:38 crc kubenswrapper[4915]: I1124 21:56:38.726715 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-catalog-content\") pod \"redhat-marketplace-8dxc2\" (UID: \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\") " pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:38 crc kubenswrapper[4915]: I1124 21:56:38.829256 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-utilities\") pod \"redhat-marketplace-8dxc2\" (UID: \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\") " pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:38 crc kubenswrapper[4915]: I1124 21:56:38.829334 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-catalog-content\") pod \"redhat-marketplace-8dxc2\" (UID: \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\") " pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:38 crc kubenswrapper[4915]: I1124 21:56:38.829547 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfk48\" (UniqueName: \"kubernetes.io/projected/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-kube-api-access-nfk48\") pod \"redhat-marketplace-8dxc2\" (UID: \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\") " pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:38 crc kubenswrapper[4915]: I1124 21:56:38.830171 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-catalog-content\") pod \"redhat-marketplace-8dxc2\" (UID: \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\") " pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:38 crc kubenswrapper[4915]: I1124 21:56:38.830297 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-utilities\") pod \"redhat-marketplace-8dxc2\" (UID: \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\") " pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:38 crc kubenswrapper[4915]: I1124 21:56:38.849847 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfk48\" (UniqueName: \"kubernetes.io/projected/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-kube-api-access-nfk48\") pod \"redhat-marketplace-8dxc2\" (UID: \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\") " pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:38 crc kubenswrapper[4915]: I1124 21:56:38.881325 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:39 crc kubenswrapper[4915]: I1124 21:56:39.422240 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dxc2"] Nov 24 21:56:39 crc kubenswrapper[4915]: I1124 21:56:39.560676 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dxc2" event={"ID":"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea","Type":"ContainerStarted","Data":"443c30179c43fb1ab7b4756a8a6e66cb5c18c739a838ce2870c0bf7259e16ff6"} Nov 24 21:56:40 crc kubenswrapper[4915]: I1124 21:56:40.574170 4915 generic.go:334] "Generic (PLEG): container finished" podID="72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" containerID="2068b757e29edce0619fc67bfeb80606581c2db968300645703730f394d4abf7" exitCode=0 Nov 24 21:56:40 crc kubenswrapper[4915]: I1124 21:56:40.574269 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dxc2" event={"ID":"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea","Type":"ContainerDied","Data":"2068b757e29edce0619fc67bfeb80606581c2db968300645703730f394d4abf7"} Nov 24 21:56:41 crc kubenswrapper[4915]: I1124 21:56:41.592552 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-8dxc2" event={"ID":"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea","Type":"ContainerStarted","Data":"3a4c4e2e33a75abbb0dc5f0355400b75ed1cb56de56c1a3f625df8e3601dcc27"} Nov 24 21:56:42 crc kubenswrapper[4915]: I1124 21:56:42.615582 4915 generic.go:334] "Generic (PLEG): container finished" podID="72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" containerID="3a4c4e2e33a75abbb0dc5f0355400b75ed1cb56de56c1a3f625df8e3601dcc27" exitCode=0 Nov 24 21:56:42 crc kubenswrapper[4915]: I1124 21:56:42.615656 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dxc2" event={"ID":"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea","Type":"ContainerDied","Data":"3a4c4e2e33a75abbb0dc5f0355400b75ed1cb56de56c1a3f625df8e3601dcc27"} Nov 24 21:56:43 crc kubenswrapper[4915]: I1124 21:56:43.628641 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dxc2" event={"ID":"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea","Type":"ContainerStarted","Data":"411e2bc458cb57297bbb07f5b087bce6c6836a12eb8c029c750106598a966c2a"} Nov 24 21:56:43 crc kubenswrapper[4915]: I1124 21:56:43.649025 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8dxc2" podStartSLOduration=3.005803252 podStartE2EDuration="5.649009409s" podCreationTimestamp="2025-11-24 21:56:38 +0000 UTC" firstStartedPulling="2025-11-24 21:56:40.576439547 +0000 UTC m=+2218.892691720" lastFinishedPulling="2025-11-24 21:56:43.219645694 +0000 UTC m=+2221.535897877" observedRunningTime="2025-11-24 21:56:43.643217293 +0000 UTC m=+2221.959469456" watchObservedRunningTime="2025-11-24 21:56:43.649009409 +0000 UTC m=+2221.965261582" Nov 24 21:56:48 crc kubenswrapper[4915]: I1124 21:56:48.882466 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:48 crc kubenswrapper[4915]: I1124 21:56:48.883313 4915 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:48 crc kubenswrapper[4915]: I1124 21:56:48.952938 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:49 crc kubenswrapper[4915]: I1124 21:56:49.802459 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:49 crc kubenswrapper[4915]: I1124 21:56:49.860278 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dxc2"] Nov 24 21:56:51 crc kubenswrapper[4915]: I1124 21:56:51.738434 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8dxc2" podUID="72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" containerName="registry-server" containerID="cri-o://411e2bc458cb57297bbb07f5b087bce6c6836a12eb8c029c750106598a966c2a" gracePeriod=2 Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.339854 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.429667 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-catalog-content\") pod \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\" (UID: \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\") " Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.429908 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfk48\" (UniqueName: \"kubernetes.io/projected/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-kube-api-access-nfk48\") pod \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\" (UID: \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\") " Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.429950 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-utilities\") pod \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\" (UID: \"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea\") " Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.431292 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-utilities" (OuterVolumeSpecName: "utilities") pod "72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" (UID: "72a38fb5-04d8-4d55-8f16-1aae3f27b4ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.441182 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-kube-api-access-nfk48" (OuterVolumeSpecName: "kube-api-access-nfk48") pod "72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" (UID: "72a38fb5-04d8-4d55-8f16-1aae3f27b4ea"). InnerVolumeSpecName "kube-api-access-nfk48". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.474687 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" (UID: "72a38fb5-04d8-4d55-8f16-1aae3f27b4ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.534199 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.534243 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfk48\" (UniqueName: \"kubernetes.io/projected/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-kube-api-access-nfk48\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.534258 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.753933 4915 generic.go:334] "Generic (PLEG): container finished" podID="72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" containerID="411e2bc458cb57297bbb07f5b087bce6c6836a12eb8c029c750106598a966c2a" exitCode=0 Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.754088 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8dxc2" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.754860 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dxc2" event={"ID":"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea","Type":"ContainerDied","Data":"411e2bc458cb57297bbb07f5b087bce6c6836a12eb8c029c750106598a966c2a"} Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.756810 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8dxc2" event={"ID":"72a38fb5-04d8-4d55-8f16-1aae3f27b4ea","Type":"ContainerDied","Data":"443c30179c43fb1ab7b4756a8a6e66cb5c18c739a838ce2870c0bf7259e16ff6"} Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.756842 4915 scope.go:117] "RemoveContainer" containerID="411e2bc458cb57297bbb07f5b087bce6c6836a12eb8c029c750106598a966c2a" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.799051 4915 scope.go:117] "RemoveContainer" containerID="3a4c4e2e33a75abbb0dc5f0355400b75ed1cb56de56c1a3f625df8e3601dcc27" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.808936 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dxc2"] Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.828150 4915 scope.go:117] "RemoveContainer" containerID="2068b757e29edce0619fc67bfeb80606581c2db968300645703730f394d4abf7" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.835223 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8dxc2"] Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.912806 4915 scope.go:117] "RemoveContainer" containerID="411e2bc458cb57297bbb07f5b087bce6c6836a12eb8c029c750106598a966c2a" Nov 24 21:56:52 crc kubenswrapper[4915]: E1124 21:56:52.913466 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"411e2bc458cb57297bbb07f5b087bce6c6836a12eb8c029c750106598a966c2a\": container with ID starting with 411e2bc458cb57297bbb07f5b087bce6c6836a12eb8c029c750106598a966c2a not found: ID does not exist" containerID="411e2bc458cb57297bbb07f5b087bce6c6836a12eb8c029c750106598a966c2a" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.913513 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"411e2bc458cb57297bbb07f5b087bce6c6836a12eb8c029c750106598a966c2a"} err="failed to get container status \"411e2bc458cb57297bbb07f5b087bce6c6836a12eb8c029c750106598a966c2a\": rpc error: code = NotFound desc = could not find container \"411e2bc458cb57297bbb07f5b087bce6c6836a12eb8c029c750106598a966c2a\": container with ID starting with 411e2bc458cb57297bbb07f5b087bce6c6836a12eb8c029c750106598a966c2a not found: ID does not exist" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.913543 4915 scope.go:117] "RemoveContainer" containerID="3a4c4e2e33a75abbb0dc5f0355400b75ed1cb56de56c1a3f625df8e3601dcc27" Nov 24 21:56:52 crc kubenswrapper[4915]: E1124 21:56:52.913960 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a4c4e2e33a75abbb0dc5f0355400b75ed1cb56de56c1a3f625df8e3601dcc27\": container with ID starting with 3a4c4e2e33a75abbb0dc5f0355400b75ed1cb56de56c1a3f625df8e3601dcc27 not found: ID does not exist" containerID="3a4c4e2e33a75abbb0dc5f0355400b75ed1cb56de56c1a3f625df8e3601dcc27" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.914042 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a4c4e2e33a75abbb0dc5f0355400b75ed1cb56de56c1a3f625df8e3601dcc27"} err="failed to get container status \"3a4c4e2e33a75abbb0dc5f0355400b75ed1cb56de56c1a3f625df8e3601dcc27\": rpc error: code = NotFound desc = could not find container \"3a4c4e2e33a75abbb0dc5f0355400b75ed1cb56de56c1a3f625df8e3601dcc27\": container with ID 
starting with 3a4c4e2e33a75abbb0dc5f0355400b75ed1cb56de56c1a3f625df8e3601dcc27 not found: ID does not exist" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.914094 4915 scope.go:117] "RemoveContainer" containerID="2068b757e29edce0619fc67bfeb80606581c2db968300645703730f394d4abf7" Nov 24 21:56:52 crc kubenswrapper[4915]: E1124 21:56:52.914549 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2068b757e29edce0619fc67bfeb80606581c2db968300645703730f394d4abf7\": container with ID starting with 2068b757e29edce0619fc67bfeb80606581c2db968300645703730f394d4abf7 not found: ID does not exist" containerID="2068b757e29edce0619fc67bfeb80606581c2db968300645703730f394d4abf7" Nov 24 21:56:52 crc kubenswrapper[4915]: I1124 21:56:52.914640 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2068b757e29edce0619fc67bfeb80606581c2db968300645703730f394d4abf7"} err="failed to get container status \"2068b757e29edce0619fc67bfeb80606581c2db968300645703730f394d4abf7\": rpc error: code = NotFound desc = could not find container \"2068b757e29edce0619fc67bfeb80606581c2db968300645703730f394d4abf7\": container with ID starting with 2068b757e29edce0619fc67bfeb80606581c2db968300645703730f394d4abf7 not found: ID does not exist" Nov 24 21:56:54 crc kubenswrapper[4915]: I1124 21:56:54.328259 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:56:54 crc kubenswrapper[4915]: I1124 21:56:54.328860 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:56:54 crc kubenswrapper[4915]: I1124 21:56:54.452501 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" path="/var/lib/kubelet/pods/72a38fb5-04d8-4d55-8f16-1aae3f27b4ea/volumes" Nov 24 21:56:54 crc kubenswrapper[4915]: I1124 21:56:54.784372 4915 generic.go:334] "Generic (PLEG): container finished" podID="96495372-42d5-4a61-99d1-be56b844f795" containerID="21f1a43dd12d41cb884dafc1411e22f1889357c50f015f2322bb6208587cca2b" exitCode=0 Nov 24 21:56:54 crc kubenswrapper[4915]: I1124 21:56:54.784425 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" event={"ID":"96495372-42d5-4a61-99d1-be56b844f795","Type":"ContainerDied","Data":"21f1a43dd12d41cb884dafc1411e22f1889357c50f015f2322bb6208587cca2b"} Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.278921 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.332064 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-neutron-metadata-combined-ca-bundle\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.332225 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-inventory\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.332283 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-ovn-combined-ca-bundle\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.332357 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.332433 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-bootstrap-combined-ca-bundle\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: 
\"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.332498 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.339380 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.340806 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.341970 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.343410 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.355000 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.377432 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-inventory" (OuterVolumeSpecName: "inventory") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.434591 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc2pv\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-kube-api-access-sc2pv\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.434628 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-repo-setup-combined-ca-bundle\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.434662 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-ssh-key\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.434695 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-telemetry-power-monitoring-combined-ca-bundle\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.434731 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc 
kubenswrapper[4915]: I1124 21:56:56.434771 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-ovn-default-certs-0\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.434811 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-telemetry-combined-ca-bundle\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.434845 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-nova-combined-ca-bundle\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.434862 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.434900 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-libvirt-combined-ca-bundle\") pod \"96495372-42d5-4a61-99d1-be56b844f795\" (UID: \"96495372-42d5-4a61-99d1-be56b844f795\") " Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.435359 4915 
reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.435381 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.435391 4915 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.435404 4915 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.435416 4915 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.435427 4915 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.440559 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.440635 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-kube-api-access-sc2pv" (OuterVolumeSpecName: "kube-api-access-sc2pv") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "kube-api-access-sc2pv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.441237 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.441758 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.441850 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.442344 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.444659 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.444741 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.444833 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.507841 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "96495372-42d5-4a61-99d1-be56b844f795" (UID: "96495372-42d5-4a61-99d1-be56b844f795"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.537262 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc2pv\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-kube-api-access-sc2pv\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.537300 4915 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.537313 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.537326 4915 reconciler_common.go:293] "Volume detached for volume 
\"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.537343 4915 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.537358 4915 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.537370 4915 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.537384 4915 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.537395 4915 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/96495372-42d5-4a61-99d1-be56b844f795-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.537407 4915 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/96495372-42d5-4a61-99d1-be56b844f795-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.815737 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" event={"ID":"96495372-42d5-4a61-99d1-be56b844f795","Type":"ContainerDied","Data":"e99c730a49d0f7a80eba6fd02de929c46b2344eb5e463547a1d781d8b8d8eb91"} Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.815817 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e99c730a49d0f7a80eba6fd02de929c46b2344eb5e463547a1d781d8b8d8eb91" Nov 24 21:56:56 crc kubenswrapper[4915]: I1124 21:56:56.815888 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-nfdss" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.011924 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx"] Nov 24 21:56:57 crc kubenswrapper[4915]: E1124 21:56:57.012351 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" containerName="extract-content" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.012371 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" containerName="extract-content" Nov 24 21:56:57 crc kubenswrapper[4915]: E1124 21:56:57.012386 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96495372-42d5-4a61-99d1-be56b844f795" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.012396 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="96495372-42d5-4a61-99d1-be56b844f795" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 21:56:57 crc kubenswrapper[4915]: E1124 21:56:57.012422 
4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" containerName="extract-utilities" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.012431 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" containerName="extract-utilities" Nov 24 21:56:57 crc kubenswrapper[4915]: E1124 21:56:57.012449 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" containerName="registry-server" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.012455 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" containerName="registry-server" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.012664 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a38fb5-04d8-4d55-8f16-1aae3f27b4ea" containerName="registry-server" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.012695 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="96495372-42d5-4a61-99d1-be56b844f795" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.013443 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.016380 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.016636 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.016664 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.016863 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.019612 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.034804 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx"] Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.055298 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.055345 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.055449 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.055556 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.055687 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvcjk\" (UniqueName: \"kubernetes.io/projected/48d4639e-cc06-4db8-b81d-336f8ef4bda5-kube-api-access-fvcjk\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.158064 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.158231 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.158388 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvcjk\" (UniqueName: \"kubernetes.io/projected/48d4639e-cc06-4db8-b81d-336f8ef4bda5-kube-api-access-fvcjk\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.158475 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.158514 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.159293 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc 
kubenswrapper[4915]: I1124 21:56:57.163846 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.163959 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.165660 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.183160 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvcjk\" (UniqueName: \"kubernetes.io/projected/48d4639e-cc06-4db8-b81d-336f8ef4bda5-kube-api-access-fvcjk\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-wbgtx\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:57 crc kubenswrapper[4915]: I1124 21:56:57.332566 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:56:58 crc kubenswrapper[4915]: I1124 21:56:58.013118 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx"] Nov 24 21:56:58 crc kubenswrapper[4915]: I1124 21:56:58.842124 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" event={"ID":"48d4639e-cc06-4db8-b81d-336f8ef4bda5","Type":"ContainerStarted","Data":"0dbde7a0a429aedbd624d1f905d1e8a56403e3e5a7b010edb471f9ffc0b6e8ed"} Nov 24 21:56:58 crc kubenswrapper[4915]: I1124 21:56:58.844138 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" event={"ID":"48d4639e-cc06-4db8-b81d-336f8ef4bda5","Type":"ContainerStarted","Data":"043346fb7d67e231dd0db572d93f5b7a8b9f4abbae4341df4cb2f09aa51a6528"} Nov 24 21:56:58 crc kubenswrapper[4915]: I1124 21:56:58.877916 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" podStartSLOduration=2.3870609959999998 podStartE2EDuration="2.877893912s" podCreationTimestamp="2025-11-24 21:56:56 +0000 UTC" firstStartedPulling="2025-11-24 21:56:58.022197052 +0000 UTC m=+2236.338449225" lastFinishedPulling="2025-11-24 21:56:58.513029958 +0000 UTC m=+2236.829282141" observedRunningTime="2025-11-24 21:56:58.873523603 +0000 UTC m=+2237.189775816" watchObservedRunningTime="2025-11-24 21:56:58.877893912 +0000 UTC m=+2237.194146095" Nov 24 21:57:24 crc kubenswrapper[4915]: I1124 21:57:24.326833 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 21:57:24 crc kubenswrapper[4915]: I1124 21:57:24.327367 
4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 21:57:24 crc kubenswrapper[4915]: I1124 21:57:24.327414 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 21:57:24 crc kubenswrapper[4915]: I1124 21:57:24.328438 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 21:57:24 crc kubenswrapper[4915]: I1124 21:57:24.328516 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" gracePeriod=600 Nov 24 21:57:24 crc kubenswrapper[4915]: E1124 21:57:24.455375 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:57:24 crc kubenswrapper[4915]: I1124 21:57:24.933412 4915 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-nknx6"] Nov 24 21:57:24 crc kubenswrapper[4915]: I1124 21:57:24.935962 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:24 crc kubenswrapper[4915]: I1124 21:57:24.956175 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nknx6"] Nov 24 21:57:24 crc kubenswrapper[4915]: I1124 21:57:24.983245 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47dmb\" (UniqueName: \"kubernetes.io/projected/8915f9c5-acb2-4ea4-8522-d4906bd59e36-kube-api-access-47dmb\") pod \"community-operators-nknx6\" (UID: \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\") " pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:24 crc kubenswrapper[4915]: I1124 21:57:24.984236 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8915f9c5-acb2-4ea4-8522-d4906bd59e36-utilities\") pod \"community-operators-nknx6\" (UID: \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\") " pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:24 crc kubenswrapper[4915]: I1124 21:57:24.987573 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8915f9c5-acb2-4ea4-8522-d4906bd59e36-catalog-content\") pod \"community-operators-nknx6\" (UID: \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\") " pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:25 crc kubenswrapper[4915]: I1124 21:57:25.089761 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8915f9c5-acb2-4ea4-8522-d4906bd59e36-utilities\") pod \"community-operators-nknx6\" (UID: \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\") 
" pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:25 crc kubenswrapper[4915]: I1124 21:57:25.089862 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8915f9c5-acb2-4ea4-8522-d4906bd59e36-catalog-content\") pod \"community-operators-nknx6\" (UID: \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\") " pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:25 crc kubenswrapper[4915]: I1124 21:57:25.090012 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47dmb\" (UniqueName: \"kubernetes.io/projected/8915f9c5-acb2-4ea4-8522-d4906bd59e36-kube-api-access-47dmb\") pod \"community-operators-nknx6\" (UID: \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\") " pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:25 crc kubenswrapper[4915]: I1124 21:57:25.090572 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8915f9c5-acb2-4ea4-8522-d4906bd59e36-utilities\") pod \"community-operators-nknx6\" (UID: \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\") " pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:25 crc kubenswrapper[4915]: I1124 21:57:25.090584 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8915f9c5-acb2-4ea4-8522-d4906bd59e36-catalog-content\") pod \"community-operators-nknx6\" (UID: \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\") " pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:25 crc kubenswrapper[4915]: I1124 21:57:25.114688 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47dmb\" (UniqueName: \"kubernetes.io/projected/8915f9c5-acb2-4ea4-8522-d4906bd59e36-kube-api-access-47dmb\") pod \"community-operators-nknx6\" (UID: \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\") " 
pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:25 crc kubenswrapper[4915]: I1124 21:57:25.190327 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" exitCode=0 Nov 24 21:57:25 crc kubenswrapper[4915]: I1124 21:57:25.190364 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388"} Nov 24 21:57:25 crc kubenswrapper[4915]: I1124 21:57:25.190394 4915 scope.go:117] "RemoveContainer" containerID="556aae8bbced33e3b49c2bf37d5c3e01d83bc99ece680acbe35b92b0990e5b5d" Nov 24 21:57:25 crc kubenswrapper[4915]: I1124 21:57:25.191611 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 21:57:25 crc kubenswrapper[4915]: E1124 21:57:25.192098 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:57:25 crc kubenswrapper[4915]: I1124 21:57:25.262949 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:25 crc kubenswrapper[4915]: W1124 21:57:25.841257 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8915f9c5_acb2_4ea4_8522_d4906bd59e36.slice/crio-f93da9faf3c073dd9d5e27b6df48a60a02b924180f0a213a6303ca50be597111 WatchSource:0}: Error finding container f93da9faf3c073dd9d5e27b6df48a60a02b924180f0a213a6303ca50be597111: Status 404 returned error can't find the container with id f93da9faf3c073dd9d5e27b6df48a60a02b924180f0a213a6303ca50be597111 Nov 24 21:57:25 crc kubenswrapper[4915]: I1124 21:57:25.841404 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nknx6"] Nov 24 21:57:26 crc kubenswrapper[4915]: I1124 21:57:26.205176 4915 generic.go:334] "Generic (PLEG): container finished" podID="8915f9c5-acb2-4ea4-8522-d4906bd59e36" containerID="3ce2ca0518213c5f796dccc96c3e63574ac8188a9c2f6504a0d3164ba217cb48" exitCode=0 Nov 24 21:57:26 crc kubenswrapper[4915]: I1124 21:57:26.205318 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nknx6" event={"ID":"8915f9c5-acb2-4ea4-8522-d4906bd59e36","Type":"ContainerDied","Data":"3ce2ca0518213c5f796dccc96c3e63574ac8188a9c2f6504a0d3164ba217cb48"} Nov 24 21:57:26 crc kubenswrapper[4915]: I1124 21:57:26.205605 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nknx6" event={"ID":"8915f9c5-acb2-4ea4-8522-d4906bd59e36","Type":"ContainerStarted","Data":"f93da9faf3c073dd9d5e27b6df48a60a02b924180f0a213a6303ca50be597111"} Nov 24 21:57:26 crc kubenswrapper[4915]: I1124 21:57:26.208262 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 21:57:27 crc kubenswrapper[4915]: I1124 21:57:27.222910 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-nknx6" event={"ID":"8915f9c5-acb2-4ea4-8522-d4906bd59e36","Type":"ContainerStarted","Data":"a748e5bbc203cc68ea5bdd7bc56ff57a290ea31a783cf1b810ec52dbe8e9bd23"} Nov 24 21:57:29 crc kubenswrapper[4915]: I1124 21:57:29.248095 4915 generic.go:334] "Generic (PLEG): container finished" podID="8915f9c5-acb2-4ea4-8522-d4906bd59e36" containerID="a748e5bbc203cc68ea5bdd7bc56ff57a290ea31a783cf1b810ec52dbe8e9bd23" exitCode=0 Nov 24 21:57:29 crc kubenswrapper[4915]: I1124 21:57:29.248218 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nknx6" event={"ID":"8915f9c5-acb2-4ea4-8522-d4906bd59e36","Type":"ContainerDied","Data":"a748e5bbc203cc68ea5bdd7bc56ff57a290ea31a783cf1b810ec52dbe8e9bd23"} Nov 24 21:57:30 crc kubenswrapper[4915]: I1124 21:57:30.260148 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nknx6" event={"ID":"8915f9c5-acb2-4ea4-8522-d4906bd59e36","Type":"ContainerStarted","Data":"18fbde89d03396c13d09fc370e200b31f0dc19f210be25a949be52be4e652454"} Nov 24 21:57:30 crc kubenswrapper[4915]: I1124 21:57:30.282882 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nknx6" podStartSLOduration=2.787427687 podStartE2EDuration="6.28286005s" podCreationTimestamp="2025-11-24 21:57:24 +0000 UTC" firstStartedPulling="2025-11-24 21:57:26.208068011 +0000 UTC m=+2264.524320184" lastFinishedPulling="2025-11-24 21:57:29.703500354 +0000 UTC m=+2268.019752547" observedRunningTime="2025-11-24 21:57:30.279807428 +0000 UTC m=+2268.596059621" watchObservedRunningTime="2025-11-24 21:57:30.28286005 +0000 UTC m=+2268.599112233" Nov 24 21:57:35 crc kubenswrapper[4915]: I1124 21:57:35.263153 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:35 crc kubenswrapper[4915]: I1124 21:57:35.263975 4915 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:35 crc kubenswrapper[4915]: I1124 21:57:35.333268 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:35 crc kubenswrapper[4915]: I1124 21:57:35.388367 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:35 crc kubenswrapper[4915]: I1124 21:57:35.587119 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nknx6"] Nov 24 21:57:37 crc kubenswrapper[4915]: I1124 21:57:37.346934 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nknx6" podUID="8915f9c5-acb2-4ea4-8522-d4906bd59e36" containerName="registry-server" containerID="cri-o://18fbde89d03396c13d09fc370e200b31f0dc19f210be25a949be52be4e652454" gracePeriod=2 Nov 24 21:57:37 crc kubenswrapper[4915]: I1124 21:57:37.949839 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.012291 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47dmb\" (UniqueName: \"kubernetes.io/projected/8915f9c5-acb2-4ea4-8522-d4906bd59e36-kube-api-access-47dmb\") pod \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\" (UID: \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\") " Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.012487 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8915f9c5-acb2-4ea4-8522-d4906bd59e36-utilities\") pod \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\" (UID: \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\") " Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.012643 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8915f9c5-acb2-4ea4-8522-d4906bd59e36-catalog-content\") pod \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\" (UID: \"8915f9c5-acb2-4ea4-8522-d4906bd59e36\") " Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.013280 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8915f9c5-acb2-4ea4-8522-d4906bd59e36-utilities" (OuterVolumeSpecName: "utilities") pod "8915f9c5-acb2-4ea4-8522-d4906bd59e36" (UID: "8915f9c5-acb2-4ea4-8522-d4906bd59e36"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.013639 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8915f9c5-acb2-4ea4-8522-d4906bd59e36-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.017901 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8915f9c5-acb2-4ea4-8522-d4906bd59e36-kube-api-access-47dmb" (OuterVolumeSpecName: "kube-api-access-47dmb") pod "8915f9c5-acb2-4ea4-8522-d4906bd59e36" (UID: "8915f9c5-acb2-4ea4-8522-d4906bd59e36"). InnerVolumeSpecName "kube-api-access-47dmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.062145 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8915f9c5-acb2-4ea4-8522-d4906bd59e36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8915f9c5-acb2-4ea4-8522-d4906bd59e36" (UID: "8915f9c5-acb2-4ea4-8522-d4906bd59e36"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.117425 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8915f9c5-acb2-4ea4-8522-d4906bd59e36-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.117463 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47dmb\" (UniqueName: \"kubernetes.io/projected/8915f9c5-acb2-4ea4-8522-d4906bd59e36-kube-api-access-47dmb\") on node \"crc\" DevicePath \"\"" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.358337 4915 generic.go:334] "Generic (PLEG): container finished" podID="8915f9c5-acb2-4ea4-8522-d4906bd59e36" containerID="18fbde89d03396c13d09fc370e200b31f0dc19f210be25a949be52be4e652454" exitCode=0 Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.358383 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nknx6" event={"ID":"8915f9c5-acb2-4ea4-8522-d4906bd59e36","Type":"ContainerDied","Data":"18fbde89d03396c13d09fc370e200b31f0dc19f210be25a949be52be4e652454"} Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.358413 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nknx6" event={"ID":"8915f9c5-acb2-4ea4-8522-d4906bd59e36","Type":"ContainerDied","Data":"f93da9faf3c073dd9d5e27b6df48a60a02b924180f0a213a6303ca50be597111"} Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.358436 4915 scope.go:117] "RemoveContainer" containerID="18fbde89d03396c13d09fc370e200b31f0dc19f210be25a949be52be4e652454" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.358458 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nknx6" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.388100 4915 scope.go:117] "RemoveContainer" containerID="a748e5bbc203cc68ea5bdd7bc56ff57a290ea31a783cf1b810ec52dbe8e9bd23" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.390075 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nknx6"] Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.401042 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nknx6"] Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.414735 4915 scope.go:117] "RemoveContainer" containerID="3ce2ca0518213c5f796dccc96c3e63574ac8188a9c2f6504a0d3164ba217cb48" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.426906 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 21:57:38 crc kubenswrapper[4915]: E1124 21:57:38.427260 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.439662 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8915f9c5-acb2-4ea4-8522-d4906bd59e36" path="/var/lib/kubelet/pods/8915f9c5-acb2-4ea4-8522-d4906bd59e36/volumes" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.460442 4915 scope.go:117] "RemoveContainer" containerID="18fbde89d03396c13d09fc370e200b31f0dc19f210be25a949be52be4e652454" Nov 24 21:57:38 crc kubenswrapper[4915]: E1124 21:57:38.461202 4915 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"18fbde89d03396c13d09fc370e200b31f0dc19f210be25a949be52be4e652454\": container with ID starting with 18fbde89d03396c13d09fc370e200b31f0dc19f210be25a949be52be4e652454 not found: ID does not exist" containerID="18fbde89d03396c13d09fc370e200b31f0dc19f210be25a949be52be4e652454" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.461242 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18fbde89d03396c13d09fc370e200b31f0dc19f210be25a949be52be4e652454"} err="failed to get container status \"18fbde89d03396c13d09fc370e200b31f0dc19f210be25a949be52be4e652454\": rpc error: code = NotFound desc = could not find container \"18fbde89d03396c13d09fc370e200b31f0dc19f210be25a949be52be4e652454\": container with ID starting with 18fbde89d03396c13d09fc370e200b31f0dc19f210be25a949be52be4e652454 not found: ID does not exist" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.461266 4915 scope.go:117] "RemoveContainer" containerID="a748e5bbc203cc68ea5bdd7bc56ff57a290ea31a783cf1b810ec52dbe8e9bd23" Nov 24 21:57:38 crc kubenswrapper[4915]: E1124 21:57:38.461604 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a748e5bbc203cc68ea5bdd7bc56ff57a290ea31a783cf1b810ec52dbe8e9bd23\": container with ID starting with a748e5bbc203cc68ea5bdd7bc56ff57a290ea31a783cf1b810ec52dbe8e9bd23 not found: ID does not exist" containerID="a748e5bbc203cc68ea5bdd7bc56ff57a290ea31a783cf1b810ec52dbe8e9bd23" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.461635 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a748e5bbc203cc68ea5bdd7bc56ff57a290ea31a783cf1b810ec52dbe8e9bd23"} err="failed to get container status \"a748e5bbc203cc68ea5bdd7bc56ff57a290ea31a783cf1b810ec52dbe8e9bd23\": rpc error: code = NotFound desc = could not find container 
\"a748e5bbc203cc68ea5bdd7bc56ff57a290ea31a783cf1b810ec52dbe8e9bd23\": container with ID starting with a748e5bbc203cc68ea5bdd7bc56ff57a290ea31a783cf1b810ec52dbe8e9bd23 not found: ID does not exist" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.461655 4915 scope.go:117] "RemoveContainer" containerID="3ce2ca0518213c5f796dccc96c3e63574ac8188a9c2f6504a0d3164ba217cb48" Nov 24 21:57:38 crc kubenswrapper[4915]: E1124 21:57:38.461940 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ce2ca0518213c5f796dccc96c3e63574ac8188a9c2f6504a0d3164ba217cb48\": container with ID starting with 3ce2ca0518213c5f796dccc96c3e63574ac8188a9c2f6504a0d3164ba217cb48 not found: ID does not exist" containerID="3ce2ca0518213c5f796dccc96c3e63574ac8188a9c2f6504a0d3164ba217cb48" Nov 24 21:57:38 crc kubenswrapper[4915]: I1124 21:57:38.461974 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ce2ca0518213c5f796dccc96c3e63574ac8188a9c2f6504a0d3164ba217cb48"} err="failed to get container status \"3ce2ca0518213c5f796dccc96c3e63574ac8188a9c2f6504a0d3164ba217cb48\": rpc error: code = NotFound desc = could not find container \"3ce2ca0518213c5f796dccc96c3e63574ac8188a9c2f6504a0d3164ba217cb48\": container with ID starting with 3ce2ca0518213c5f796dccc96c3e63574ac8188a9c2f6504a0d3164ba217cb48 not found: ID does not exist" Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.807190 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c6hq2"] Nov 24 21:57:47 crc kubenswrapper[4915]: E1124 21:57:47.808289 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8915f9c5-acb2-4ea4-8522-d4906bd59e36" containerName="registry-server" Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.808307 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8915f9c5-acb2-4ea4-8522-d4906bd59e36" 
containerName="registry-server" Nov 24 21:57:47 crc kubenswrapper[4915]: E1124 21:57:47.808327 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8915f9c5-acb2-4ea4-8522-d4906bd59e36" containerName="extract-utilities" Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.808336 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8915f9c5-acb2-4ea4-8522-d4906bd59e36" containerName="extract-utilities" Nov 24 21:57:47 crc kubenswrapper[4915]: E1124 21:57:47.808372 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8915f9c5-acb2-4ea4-8522-d4906bd59e36" containerName="extract-content" Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.808379 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8915f9c5-acb2-4ea4-8522-d4906bd59e36" containerName="extract-content" Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.808737 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8915f9c5-acb2-4ea4-8522-d4906bd59e36" containerName="registry-server" Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.811666 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.832395 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6hq2"] Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.879841 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329ee345-193f-47cc-9216-c127effd58cf-utilities\") pod \"certified-operators-c6hq2\" (UID: \"329ee345-193f-47cc-9216-c127effd58cf\") " pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.879981 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329ee345-193f-47cc-9216-c127effd58cf-catalog-content\") pod \"certified-operators-c6hq2\" (UID: \"329ee345-193f-47cc-9216-c127effd58cf\") " pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.880265 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7pwp\" (UniqueName: \"kubernetes.io/projected/329ee345-193f-47cc-9216-c127effd58cf-kube-api-access-b7pwp\") pod \"certified-operators-c6hq2\" (UID: \"329ee345-193f-47cc-9216-c127effd58cf\") " pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.982946 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329ee345-193f-47cc-9216-c127effd58cf-catalog-content\") pod \"certified-operators-c6hq2\" (UID: \"329ee345-193f-47cc-9216-c127effd58cf\") " pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.983153 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b7pwp\" (UniqueName: \"kubernetes.io/projected/329ee345-193f-47cc-9216-c127effd58cf-kube-api-access-b7pwp\") pod \"certified-operators-c6hq2\" (UID: \"329ee345-193f-47cc-9216-c127effd58cf\") " pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.983360 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329ee345-193f-47cc-9216-c127effd58cf-utilities\") pod \"certified-operators-c6hq2\" (UID: \"329ee345-193f-47cc-9216-c127effd58cf\") " pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.983518 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329ee345-193f-47cc-9216-c127effd58cf-catalog-content\") pod \"certified-operators-c6hq2\" (UID: \"329ee345-193f-47cc-9216-c127effd58cf\") " pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:47 crc kubenswrapper[4915]: I1124 21:57:47.983699 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329ee345-193f-47cc-9216-c127effd58cf-utilities\") pod \"certified-operators-c6hq2\" (UID: \"329ee345-193f-47cc-9216-c127effd58cf\") " pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:48 crc kubenswrapper[4915]: I1124 21:57:48.008369 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7pwp\" (UniqueName: \"kubernetes.io/projected/329ee345-193f-47cc-9216-c127effd58cf-kube-api-access-b7pwp\") pod \"certified-operators-c6hq2\" (UID: \"329ee345-193f-47cc-9216-c127effd58cf\") " pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:48 crc kubenswrapper[4915]: I1124 21:57:48.155430 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:48 crc kubenswrapper[4915]: I1124 21:57:48.675584 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6hq2"] Nov 24 21:57:49 crc kubenswrapper[4915]: I1124 21:57:49.532158 4915 generic.go:334] "Generic (PLEG): container finished" podID="329ee345-193f-47cc-9216-c127effd58cf" containerID="fefa4b252a5ffa3af53c24250aef295963f9019ad3668dcb52b892c026817d7c" exitCode=0 Nov 24 21:57:49 crc kubenswrapper[4915]: I1124 21:57:49.532491 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6hq2" event={"ID":"329ee345-193f-47cc-9216-c127effd58cf","Type":"ContainerDied","Data":"fefa4b252a5ffa3af53c24250aef295963f9019ad3668dcb52b892c026817d7c"} Nov 24 21:57:49 crc kubenswrapper[4915]: I1124 21:57:49.532535 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6hq2" event={"ID":"329ee345-193f-47cc-9216-c127effd58cf","Type":"ContainerStarted","Data":"e7b289a7ea29a75ceff562487ec170b00209935c7ddc3eaf6bf0f794d4385c25"} Nov 24 21:57:51 crc kubenswrapper[4915]: I1124 21:57:51.556526 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6hq2" event={"ID":"329ee345-193f-47cc-9216-c127effd58cf","Type":"ContainerStarted","Data":"ef996c04ba6451d591b081787e93f1da4e18827a6a0bf5cffaaa40b526b5d2a0"} Nov 24 21:57:52 crc kubenswrapper[4915]: I1124 21:57:52.576743 4915 generic.go:334] "Generic (PLEG): container finished" podID="329ee345-193f-47cc-9216-c127effd58cf" containerID="ef996c04ba6451d591b081787e93f1da4e18827a6a0bf5cffaaa40b526b5d2a0" exitCode=0 Nov 24 21:57:52 crc kubenswrapper[4915]: I1124 21:57:52.576843 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6hq2" 
event={"ID":"329ee345-193f-47cc-9216-c127effd58cf","Type":"ContainerDied","Data":"ef996c04ba6451d591b081787e93f1da4e18827a6a0bf5cffaaa40b526b5d2a0"} Nov 24 21:57:53 crc kubenswrapper[4915]: I1124 21:57:53.427600 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 21:57:53 crc kubenswrapper[4915]: E1124 21:57:53.428493 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:57:53 crc kubenswrapper[4915]: I1124 21:57:53.593095 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6hq2" event={"ID":"329ee345-193f-47cc-9216-c127effd58cf","Type":"ContainerStarted","Data":"2e146aec3ac3cdc08f0d40ab239c1acb1283feaf04389377968d74de21f1ae9c"} Nov 24 21:57:53 crc kubenswrapper[4915]: I1124 21:57:53.627846 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c6hq2" podStartSLOduration=3.141510213 podStartE2EDuration="6.627825319s" podCreationTimestamp="2025-11-24 21:57:47 +0000 UTC" firstStartedPulling="2025-11-24 21:57:49.535417003 +0000 UTC m=+2287.851669176" lastFinishedPulling="2025-11-24 21:57:53.021732109 +0000 UTC m=+2291.337984282" observedRunningTime="2025-11-24 21:57:53.615518346 +0000 UTC m=+2291.931770539" watchObservedRunningTime="2025-11-24 21:57:53.627825319 +0000 UTC m=+2291.944077502" Nov 24 21:57:58 crc kubenswrapper[4915]: I1124 21:57:58.156570 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:58 crc 
kubenswrapper[4915]: I1124 21:57:58.157079 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:58 crc kubenswrapper[4915]: I1124 21:57:58.213387 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:58 crc kubenswrapper[4915]: I1124 21:57:58.723439 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:57:58 crc kubenswrapper[4915]: I1124 21:57:58.804717 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6hq2"] Nov 24 21:58:00 crc kubenswrapper[4915]: I1124 21:58:00.683626 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c6hq2" podUID="329ee345-193f-47cc-9216-c127effd58cf" containerName="registry-server" containerID="cri-o://2e146aec3ac3cdc08f0d40ab239c1acb1283feaf04389377968d74de21f1ae9c" gracePeriod=2 Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.231808 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.421999 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329ee345-193f-47cc-9216-c127effd58cf-catalog-content\") pod \"329ee345-193f-47cc-9216-c127effd58cf\" (UID: \"329ee345-193f-47cc-9216-c127effd58cf\") " Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.422098 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7pwp\" (UniqueName: \"kubernetes.io/projected/329ee345-193f-47cc-9216-c127effd58cf-kube-api-access-b7pwp\") pod \"329ee345-193f-47cc-9216-c127effd58cf\" (UID: \"329ee345-193f-47cc-9216-c127effd58cf\") " Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.422487 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329ee345-193f-47cc-9216-c127effd58cf-utilities\") pod \"329ee345-193f-47cc-9216-c127effd58cf\" (UID: \"329ee345-193f-47cc-9216-c127effd58cf\") " Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.423129 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/329ee345-193f-47cc-9216-c127effd58cf-utilities" (OuterVolumeSpecName: "utilities") pod "329ee345-193f-47cc-9216-c127effd58cf" (UID: "329ee345-193f-47cc-9216-c127effd58cf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.423856 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/329ee345-193f-47cc-9216-c127effd58cf-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.436859 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/329ee345-193f-47cc-9216-c127effd58cf-kube-api-access-b7pwp" (OuterVolumeSpecName: "kube-api-access-b7pwp") pod "329ee345-193f-47cc-9216-c127effd58cf" (UID: "329ee345-193f-47cc-9216-c127effd58cf"). InnerVolumeSpecName "kube-api-access-b7pwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.483858 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/329ee345-193f-47cc-9216-c127effd58cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "329ee345-193f-47cc-9216-c127effd58cf" (UID: "329ee345-193f-47cc-9216-c127effd58cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.526522 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/329ee345-193f-47cc-9216-c127effd58cf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.526857 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7pwp\" (UniqueName: \"kubernetes.io/projected/329ee345-193f-47cc-9216-c127effd58cf-kube-api-access-b7pwp\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.696025 4915 generic.go:334] "Generic (PLEG): container finished" podID="329ee345-193f-47cc-9216-c127effd58cf" containerID="2e146aec3ac3cdc08f0d40ab239c1acb1283feaf04389377968d74de21f1ae9c" exitCode=0 Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.696077 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6hq2" event={"ID":"329ee345-193f-47cc-9216-c127effd58cf","Type":"ContainerDied","Data":"2e146aec3ac3cdc08f0d40ab239c1acb1283feaf04389377968d74de21f1ae9c"} Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.696096 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6hq2" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.696123 4915 scope.go:117] "RemoveContainer" containerID="2e146aec3ac3cdc08f0d40ab239c1acb1283feaf04389377968d74de21f1ae9c" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.696108 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6hq2" event={"ID":"329ee345-193f-47cc-9216-c127effd58cf","Type":"ContainerDied","Data":"e7b289a7ea29a75ceff562487ec170b00209935c7ddc3eaf6bf0f794d4385c25"} Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.775794 4915 scope.go:117] "RemoveContainer" containerID="ef996c04ba6451d591b081787e93f1da4e18827a6a0bf5cffaaa40b526b5d2a0" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.779245 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6hq2"] Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.810109 4915 scope.go:117] "RemoveContainer" containerID="fefa4b252a5ffa3af53c24250aef295963f9019ad3668dcb52b892c026817d7c" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.815443 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c6hq2"] Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.851765 4915 scope.go:117] "RemoveContainer" containerID="2e146aec3ac3cdc08f0d40ab239c1acb1283feaf04389377968d74de21f1ae9c" Nov 24 21:58:01 crc kubenswrapper[4915]: E1124 21:58:01.852229 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e146aec3ac3cdc08f0d40ab239c1acb1283feaf04389377968d74de21f1ae9c\": container with ID starting with 2e146aec3ac3cdc08f0d40ab239c1acb1283feaf04389377968d74de21f1ae9c not found: ID does not exist" containerID="2e146aec3ac3cdc08f0d40ab239c1acb1283feaf04389377968d74de21f1ae9c" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.852278 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e146aec3ac3cdc08f0d40ab239c1acb1283feaf04389377968d74de21f1ae9c"} err="failed to get container status \"2e146aec3ac3cdc08f0d40ab239c1acb1283feaf04389377968d74de21f1ae9c\": rpc error: code = NotFound desc = could not find container \"2e146aec3ac3cdc08f0d40ab239c1acb1283feaf04389377968d74de21f1ae9c\": container with ID starting with 2e146aec3ac3cdc08f0d40ab239c1acb1283feaf04389377968d74de21f1ae9c not found: ID does not exist" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.852310 4915 scope.go:117] "RemoveContainer" containerID="ef996c04ba6451d591b081787e93f1da4e18827a6a0bf5cffaaa40b526b5d2a0" Nov 24 21:58:01 crc kubenswrapper[4915]: E1124 21:58:01.852709 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef996c04ba6451d591b081787e93f1da4e18827a6a0bf5cffaaa40b526b5d2a0\": container with ID starting with ef996c04ba6451d591b081787e93f1da4e18827a6a0bf5cffaaa40b526b5d2a0 not found: ID does not exist" containerID="ef996c04ba6451d591b081787e93f1da4e18827a6a0bf5cffaaa40b526b5d2a0" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.852740 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef996c04ba6451d591b081787e93f1da4e18827a6a0bf5cffaaa40b526b5d2a0"} err="failed to get container status \"ef996c04ba6451d591b081787e93f1da4e18827a6a0bf5cffaaa40b526b5d2a0\": rpc error: code = NotFound desc = could not find container \"ef996c04ba6451d591b081787e93f1da4e18827a6a0bf5cffaaa40b526b5d2a0\": container with ID starting with ef996c04ba6451d591b081787e93f1da4e18827a6a0bf5cffaaa40b526b5d2a0 not found: ID does not exist" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.852761 4915 scope.go:117] "RemoveContainer" containerID="fefa4b252a5ffa3af53c24250aef295963f9019ad3668dcb52b892c026817d7c" Nov 24 21:58:01 crc kubenswrapper[4915]: E1124 
21:58:01.853071 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fefa4b252a5ffa3af53c24250aef295963f9019ad3668dcb52b892c026817d7c\": container with ID starting with fefa4b252a5ffa3af53c24250aef295963f9019ad3668dcb52b892c026817d7c not found: ID does not exist" containerID="fefa4b252a5ffa3af53c24250aef295963f9019ad3668dcb52b892c026817d7c" Nov 24 21:58:01 crc kubenswrapper[4915]: I1124 21:58:01.853096 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fefa4b252a5ffa3af53c24250aef295963f9019ad3668dcb52b892c026817d7c"} err="failed to get container status \"fefa4b252a5ffa3af53c24250aef295963f9019ad3668dcb52b892c026817d7c\": rpc error: code = NotFound desc = could not find container \"fefa4b252a5ffa3af53c24250aef295963f9019ad3668dcb52b892c026817d7c\": container with ID starting with fefa4b252a5ffa3af53c24250aef295963f9019ad3668dcb52b892c026817d7c not found: ID does not exist" Nov 24 21:58:02 crc kubenswrapper[4915]: I1124 21:58:02.444117 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="329ee345-193f-47cc-9216-c127effd58cf" path="/var/lib/kubelet/pods/329ee345-193f-47cc-9216-c127effd58cf/volumes" Nov 24 21:58:07 crc kubenswrapper[4915]: I1124 21:58:07.819445 4915 generic.go:334] "Generic (PLEG): container finished" podID="48d4639e-cc06-4db8-b81d-336f8ef4bda5" containerID="0dbde7a0a429aedbd624d1f905d1e8a56403e3e5a7b010edb471f9ffc0b6e8ed" exitCode=0 Nov 24 21:58:07 crc kubenswrapper[4915]: I1124 21:58:07.819515 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" event={"ID":"48d4639e-cc06-4db8-b81d-336f8ef4bda5","Type":"ContainerDied","Data":"0dbde7a0a429aedbd624d1f905d1e8a56403e3e5a7b010edb471f9ffc0b6e8ed"} Nov 24 21:58:08 crc kubenswrapper[4915]: I1124 21:58:08.427923 4915 scope.go:117] "RemoveContainer" 
containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 21:58:08 crc kubenswrapper[4915]: E1124 21:58:08.429957 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.443275 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.561116 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvcjk\" (UniqueName: \"kubernetes.io/projected/48d4639e-cc06-4db8-b81d-336f8ef4bda5-kube-api-access-fvcjk\") pod \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.561557 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ovncontroller-config-0\") pod \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.561675 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ovn-combined-ca-bundle\") pod \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.561801 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-inventory\") pod \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.561860 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ssh-key\") pod \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\" (UID: \"48d4639e-cc06-4db8-b81d-336f8ef4bda5\") " Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.570098 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48d4639e-cc06-4db8-b81d-336f8ef4bda5-kube-api-access-fvcjk" (OuterVolumeSpecName: "kube-api-access-fvcjk") pod "48d4639e-cc06-4db8-b81d-336f8ef4bda5" (UID: "48d4639e-cc06-4db8-b81d-336f8ef4bda5"). InnerVolumeSpecName "kube-api-access-fvcjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.576893 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "48d4639e-cc06-4db8-b81d-336f8ef4bda5" (UID: "48d4639e-cc06-4db8-b81d-336f8ef4bda5"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.599330 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "48d4639e-cc06-4db8-b81d-336f8ef4bda5" (UID: "48d4639e-cc06-4db8-b81d-336f8ef4bda5"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.614219 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "48d4639e-cc06-4db8-b81d-336f8ef4bda5" (UID: "48d4639e-cc06-4db8-b81d-336f8ef4bda5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.624446 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-inventory" (OuterVolumeSpecName: "inventory") pod "48d4639e-cc06-4db8-b81d-336f8ef4bda5" (UID: "48d4639e-cc06-4db8-b81d-336f8ef4bda5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.665097 4915 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.665127 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.665136 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.665146 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvcjk\" (UniqueName: \"kubernetes.io/projected/48d4639e-cc06-4db8-b81d-336f8ef4bda5-kube-api-access-fvcjk\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:09 crc 
kubenswrapper[4915]: I1124 21:58:09.665154 4915 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/48d4639e-cc06-4db8-b81d-336f8ef4bda5-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.846474 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" event={"ID":"48d4639e-cc06-4db8-b81d-336f8ef4bda5","Type":"ContainerDied","Data":"043346fb7d67e231dd0db572d93f5b7a8b9f4abbae4341df4cb2f09aa51a6528"} Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.846518 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="043346fb7d67e231dd0db572d93f5b7a8b9f4abbae4341df4cb2f09aa51a6528" Nov 24 21:58:09 crc kubenswrapper[4915]: I1124 21:58:09.846566 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-wbgtx" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.044114 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5"] Nov 24 21:58:10 crc kubenswrapper[4915]: E1124 21:58:10.044707 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329ee345-193f-47cc-9216-c127effd58cf" containerName="extract-content" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.044733 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="329ee345-193f-47cc-9216-c127effd58cf" containerName="extract-content" Nov 24 21:58:10 crc kubenswrapper[4915]: E1124 21:58:10.044758 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329ee345-193f-47cc-9216-c127effd58cf" containerName="extract-utilities" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.044767 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="329ee345-193f-47cc-9216-c127effd58cf" 
containerName="extract-utilities" Nov 24 21:58:10 crc kubenswrapper[4915]: E1124 21:58:10.044815 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="329ee345-193f-47cc-9216-c127effd58cf" containerName="registry-server" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.044823 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="329ee345-193f-47cc-9216-c127effd58cf" containerName="registry-server" Nov 24 21:58:10 crc kubenswrapper[4915]: E1124 21:58:10.044860 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48d4639e-cc06-4db8-b81d-336f8ef4bda5" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.044870 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="48d4639e-cc06-4db8-b81d-336f8ef4bda5" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.045202 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="48d4639e-cc06-4db8-b81d-336f8ef4bda5" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.045227 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="329ee345-193f-47cc-9216-c127effd58cf" containerName="registry-server" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.046201 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.048595 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.049148 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.049262 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.050025 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.051604 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.052340 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.063677 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5"] Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.177330 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.177420 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.177586 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmjgg\" (UniqueName: \"kubernetes.io/projected/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-kube-api-access-zmjgg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.177626 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.177681 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.177725 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" 
(UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.280903 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmjgg\" (UniqueName: \"kubernetes.io/projected/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-kube-api-access-zmjgg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.280984 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.281242 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.281309 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.281410 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.281459 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.287029 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.287299 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: 
\"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.287497 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.287600 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.290994 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.309645 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmjgg\" (UniqueName: \"kubernetes.io/projected/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-kube-api-access-zmjgg\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:10 crc kubenswrapper[4915]: I1124 21:58:10.372343 4915 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:58:11 crc kubenswrapper[4915]: I1124 21:58:11.010736 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5"] Nov 24 21:58:11 crc kubenswrapper[4915]: I1124 21:58:11.869740 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" event={"ID":"62407755-3fa0-4c4d-90ff-cf42f25bdbc6","Type":"ContainerStarted","Data":"182c6fac5150c38927b7d09275d28ce0b0cbcf4808c06ebc29693ec043ed4cad"} Nov 24 21:58:11 crc kubenswrapper[4915]: I1124 21:58:11.870203 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" event={"ID":"62407755-3fa0-4c4d-90ff-cf42f25bdbc6","Type":"ContainerStarted","Data":"db82ad39b0857c743bcff2a7b29f0fdffa3eb670020985fb1eb7429a939822b0"} Nov 24 21:58:11 crc kubenswrapper[4915]: I1124 21:58:11.899288 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" podStartSLOduration=1.419548729 podStartE2EDuration="1.899267845s" podCreationTimestamp="2025-11-24 21:58:10 +0000 UTC" firstStartedPulling="2025-11-24 21:58:11.018768985 +0000 UTC m=+2309.335021158" lastFinishedPulling="2025-11-24 21:58:11.498488091 +0000 UTC m=+2309.814740274" observedRunningTime="2025-11-24 21:58:11.895095223 +0000 UTC m=+2310.211347406" watchObservedRunningTime="2025-11-24 21:58:11.899267845 +0000 UTC m=+2310.215520018" Nov 24 21:58:22 crc kubenswrapper[4915]: I1124 21:58:22.439456 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 21:58:22 crc kubenswrapper[4915]: E1124 21:58:22.440368 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:58:38 crc kubenswrapper[4915]: I1124 21:58:38.426536 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 21:58:38 crc kubenswrapper[4915]: E1124 21:58:38.427590 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:58:53 crc kubenswrapper[4915]: I1124 21:58:53.426964 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 21:58:53 crc kubenswrapper[4915]: E1124 21:58:53.428139 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:59:05 crc kubenswrapper[4915]: I1124 21:59:05.427199 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 21:59:05 crc kubenswrapper[4915]: E1124 21:59:05.428204 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:59:05 crc kubenswrapper[4915]: I1124 21:59:05.563034 4915 generic.go:334] "Generic (PLEG): container finished" podID="62407755-3fa0-4c4d-90ff-cf42f25bdbc6" containerID="182c6fac5150c38927b7d09275d28ce0b0cbcf4808c06ebc29693ec043ed4cad" exitCode=0 Nov 24 21:59:05 crc kubenswrapper[4915]: I1124 21:59:05.563129 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" event={"ID":"62407755-3fa0-4c4d-90ff-cf42f25bdbc6","Type":"ContainerDied","Data":"182c6fac5150c38927b7d09275d28ce0b0cbcf4808c06ebc29693ec043ed4cad"} Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.064079 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.141887 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-inventory\") pod \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.141956 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-neutron-metadata-combined-ca-bundle\") pod \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.142030 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.142153 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmjgg\" (UniqueName: \"kubernetes.io/projected/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-kube-api-access-zmjgg\") pod \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.143324 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-ssh-key\") pod \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 
21:59:07.143666 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-nova-metadata-neutron-config-0\") pod \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\" (UID: \"62407755-3fa0-4c4d-90ff-cf42f25bdbc6\") " Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.160892 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-kube-api-access-zmjgg" (OuterVolumeSpecName: "kube-api-access-zmjgg") pod "62407755-3fa0-4c4d-90ff-cf42f25bdbc6" (UID: "62407755-3fa0-4c4d-90ff-cf42f25bdbc6"). InnerVolumeSpecName "kube-api-access-zmjgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.168886 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "62407755-3fa0-4c4d-90ff-cf42f25bdbc6" (UID: "62407755-3fa0-4c4d-90ff-cf42f25bdbc6"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.179274 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-inventory" (OuterVolumeSpecName: "inventory") pod "62407755-3fa0-4c4d-90ff-cf42f25bdbc6" (UID: "62407755-3fa0-4c4d-90ff-cf42f25bdbc6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.180848 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "62407755-3fa0-4c4d-90ff-cf42f25bdbc6" (UID: "62407755-3fa0-4c4d-90ff-cf42f25bdbc6"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.184809 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "62407755-3fa0-4c4d-90ff-cf42f25bdbc6" (UID: "62407755-3fa0-4c4d-90ff-cf42f25bdbc6"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.196908 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "62407755-3fa0-4c4d-90ff-cf42f25bdbc6" (UID: "62407755-3fa0-4c4d-90ff-cf42f25bdbc6"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.247401 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.247449 4915 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.247472 4915 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.247495 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmjgg\" (UniqueName: \"kubernetes.io/projected/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-kube-api-access-zmjgg\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.247512 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.247528 4915 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62407755-3fa0-4c4d-90ff-cf42f25bdbc6-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.589868 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" 
event={"ID":"62407755-3fa0-4c4d-90ff-cf42f25bdbc6","Type":"ContainerDied","Data":"db82ad39b0857c743bcff2a7b29f0fdffa3eb670020985fb1eb7429a939822b0"} Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.590005 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db82ad39b0857c743bcff2a7b29f0fdffa3eb670020985fb1eb7429a939822b0" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.590085 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.698759 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr"] Nov 24 21:59:07 crc kubenswrapper[4915]: E1124 21:59:07.699680 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62407755-3fa0-4c4d-90ff-cf42f25bdbc6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.699709 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="62407755-3fa0-4c4d-90ff-cf42f25bdbc6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.700154 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="62407755-3fa0-4c4d-90ff-cf42f25bdbc6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.701245 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.703647 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.703675 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.703685 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.704993 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.705171 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.710887 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr"] Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.758173 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.758344 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.758395 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9rbn\" (UniqueName: \"kubernetes.io/projected/93a85159-d030-4598-8bad-305ba5b7a459-kube-api-access-n9rbn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.758429 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.758465 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.861303 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.861480 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.861522 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9rbn\" (UniqueName: \"kubernetes.io/projected/93a85159-d030-4598-8bad-305ba5b7a459-kube-api-access-n9rbn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.861551 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.861582 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.865755 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.865755 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.866190 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.866734 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:07 crc kubenswrapper[4915]: I1124 21:59:07.882069 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9rbn\" (UniqueName: \"kubernetes.io/projected/93a85159-d030-4598-8bad-305ba5b7a459-kube-api-access-n9rbn\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:08 crc kubenswrapper[4915]: I1124 21:59:08.017732 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 21:59:08 crc kubenswrapper[4915]: I1124 21:59:08.607128 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr"] Nov 24 21:59:08 crc kubenswrapper[4915]: W1124 21:59:08.624224 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93a85159_d030_4598_8bad_305ba5b7a459.slice/crio-aa599eafab3e221cf4bdda7b7e85700caa5dd30acfe962c03082ee664850528b WatchSource:0}: Error finding container aa599eafab3e221cf4bdda7b7e85700caa5dd30acfe962c03082ee664850528b: Status 404 returned error can't find the container with id aa599eafab3e221cf4bdda7b7e85700caa5dd30acfe962c03082ee664850528b Nov 24 21:59:09 crc kubenswrapper[4915]: I1124 21:59:09.647958 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" event={"ID":"93a85159-d030-4598-8bad-305ba5b7a459","Type":"ContainerStarted","Data":"ee6dd2c2801457444c4c13eb2322affc46ffe8b9d3e6f05d46e775982c4e4865"} Nov 24 21:59:09 crc kubenswrapper[4915]: I1124 21:59:09.648554 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" event={"ID":"93a85159-d030-4598-8bad-305ba5b7a459","Type":"ContainerStarted","Data":"aa599eafab3e221cf4bdda7b7e85700caa5dd30acfe962c03082ee664850528b"} Nov 24 21:59:09 crc kubenswrapper[4915]: I1124 21:59:09.674595 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" podStartSLOduration=2.260372775 podStartE2EDuration="2.674567886s" podCreationTimestamp="2025-11-24 21:59:07 +0000 UTC" firstStartedPulling="2025-11-24 21:59:08.629721801 +0000 UTC m=+2366.945973984" lastFinishedPulling="2025-11-24 21:59:09.043916882 +0000 UTC m=+2367.360169095" 
observedRunningTime="2025-11-24 21:59:09.671844953 +0000 UTC m=+2367.988097166" watchObservedRunningTime="2025-11-24 21:59:09.674567886 +0000 UTC m=+2367.990820099" Nov 24 21:59:17 crc kubenswrapper[4915]: I1124 21:59:17.427558 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 21:59:17 crc kubenswrapper[4915]: E1124 21:59:17.428333 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:59:30 crc kubenswrapper[4915]: I1124 21:59:30.427194 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 21:59:30 crc kubenswrapper[4915]: E1124 21:59:30.428644 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:59:42 crc kubenswrapper[4915]: I1124 21:59:42.428111 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 21:59:42 crc kubenswrapper[4915]: E1124 21:59:42.429131 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 21:59:56 crc kubenswrapper[4915]: I1124 21:59:56.426991 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 21:59:56 crc kubenswrapper[4915]: E1124 21:59:56.427713 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.148372 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw"] Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.150873 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.159319 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.159373 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.165207 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw"] Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.311355 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2782360d-c02c-49e5-978b-92d85dee47c0-config-volume\") pod \"collect-profiles-29400360-gh2dw\" (UID: \"2782360d-c02c-49e5-978b-92d85dee47c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.311408 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2782360d-c02c-49e5-978b-92d85dee47c0-secret-volume\") pod \"collect-profiles-29400360-gh2dw\" (UID: \"2782360d-c02c-49e5-978b-92d85dee47c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.311566 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nrfm\" (UniqueName: \"kubernetes.io/projected/2782360d-c02c-49e5-978b-92d85dee47c0-kube-api-access-9nrfm\") pod \"collect-profiles-29400360-gh2dw\" (UID: \"2782360d-c02c-49e5-978b-92d85dee47c0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.413288 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2782360d-c02c-49e5-978b-92d85dee47c0-config-volume\") pod \"collect-profiles-29400360-gh2dw\" (UID: \"2782360d-c02c-49e5-978b-92d85dee47c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.413333 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2782360d-c02c-49e5-978b-92d85dee47c0-secret-volume\") pod \"collect-profiles-29400360-gh2dw\" (UID: \"2782360d-c02c-49e5-978b-92d85dee47c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.413387 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nrfm\" (UniqueName: \"kubernetes.io/projected/2782360d-c02c-49e5-978b-92d85dee47c0-kube-api-access-9nrfm\") pod \"collect-profiles-29400360-gh2dw\" (UID: \"2782360d-c02c-49e5-978b-92d85dee47c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.414261 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2782360d-c02c-49e5-978b-92d85dee47c0-config-volume\") pod \"collect-profiles-29400360-gh2dw\" (UID: \"2782360d-c02c-49e5-978b-92d85dee47c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.419202 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2782360d-c02c-49e5-978b-92d85dee47c0-secret-volume\") pod \"collect-profiles-29400360-gh2dw\" (UID: \"2782360d-c02c-49e5-978b-92d85dee47c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.429650 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nrfm\" (UniqueName: \"kubernetes.io/projected/2782360d-c02c-49e5-978b-92d85dee47c0-kube-api-access-9nrfm\") pod \"collect-profiles-29400360-gh2dw\" (UID: \"2782360d-c02c-49e5-978b-92d85dee47c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" Nov 24 22:00:00 crc kubenswrapper[4915]: I1124 22:00:00.501664 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" Nov 24 22:00:01 crc kubenswrapper[4915]: I1124 22:00:01.020369 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw"] Nov 24 22:00:01 crc kubenswrapper[4915]: I1124 22:00:01.556186 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" event={"ID":"2782360d-c02c-49e5-978b-92d85dee47c0","Type":"ContainerStarted","Data":"6bf1527c96607dd0362fd85ee2349e8814c726b4a4fcf7e25c06f20ab51115d7"} Nov 24 22:00:01 crc kubenswrapper[4915]: I1124 22:00:01.556234 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" event={"ID":"2782360d-c02c-49e5-978b-92d85dee47c0","Type":"ContainerStarted","Data":"1275254c188be774e46d8749ad130b58d567029ad785a52e2f42558c35bdadce"} Nov 24 22:00:01 crc kubenswrapper[4915]: I1124 22:00:01.583541 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" 
podStartSLOduration=1.583512481 podStartE2EDuration="1.583512481s" podCreationTimestamp="2025-11-24 22:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 22:00:01.575561255 +0000 UTC m=+2419.891813448" watchObservedRunningTime="2025-11-24 22:00:01.583512481 +0000 UTC m=+2419.899764654" Nov 24 22:00:02 crc kubenswrapper[4915]: I1124 22:00:02.572084 4915 generic.go:334] "Generic (PLEG): container finished" podID="2782360d-c02c-49e5-978b-92d85dee47c0" containerID="6bf1527c96607dd0362fd85ee2349e8814c726b4a4fcf7e25c06f20ab51115d7" exitCode=0 Nov 24 22:00:02 crc kubenswrapper[4915]: I1124 22:00:02.572131 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" event={"ID":"2782360d-c02c-49e5-978b-92d85dee47c0","Type":"ContainerDied","Data":"6bf1527c96607dd0362fd85ee2349e8814c726b4a4fcf7e25c06f20ab51115d7"} Nov 24 22:00:03 crc kubenswrapper[4915]: I1124 22:00:03.975260 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" Nov 24 22:00:04 crc kubenswrapper[4915]: I1124 22:00:04.099960 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nrfm\" (UniqueName: \"kubernetes.io/projected/2782360d-c02c-49e5-978b-92d85dee47c0-kube-api-access-9nrfm\") pod \"2782360d-c02c-49e5-978b-92d85dee47c0\" (UID: \"2782360d-c02c-49e5-978b-92d85dee47c0\") " Nov 24 22:00:04 crc kubenswrapper[4915]: I1124 22:00:04.100192 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2782360d-c02c-49e5-978b-92d85dee47c0-config-volume\") pod \"2782360d-c02c-49e5-978b-92d85dee47c0\" (UID: \"2782360d-c02c-49e5-978b-92d85dee47c0\") " Nov 24 22:00:04 crc kubenswrapper[4915]: I1124 22:00:04.100285 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2782360d-c02c-49e5-978b-92d85dee47c0-secret-volume\") pod \"2782360d-c02c-49e5-978b-92d85dee47c0\" (UID: \"2782360d-c02c-49e5-978b-92d85dee47c0\") " Nov 24 22:00:04 crc kubenswrapper[4915]: I1124 22:00:04.101286 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2782360d-c02c-49e5-978b-92d85dee47c0-config-volume" (OuterVolumeSpecName: "config-volume") pod "2782360d-c02c-49e5-978b-92d85dee47c0" (UID: "2782360d-c02c-49e5-978b-92d85dee47c0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:00:04 crc kubenswrapper[4915]: I1124 22:00:04.109421 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2782360d-c02c-49e5-978b-92d85dee47c0-kube-api-access-9nrfm" (OuterVolumeSpecName: "kube-api-access-9nrfm") pod "2782360d-c02c-49e5-978b-92d85dee47c0" (UID: "2782360d-c02c-49e5-978b-92d85dee47c0"). 
InnerVolumeSpecName "kube-api-access-9nrfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:00:04 crc kubenswrapper[4915]: I1124 22:00:04.118608 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2782360d-c02c-49e5-978b-92d85dee47c0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2782360d-c02c-49e5-978b-92d85dee47c0" (UID: "2782360d-c02c-49e5-978b-92d85dee47c0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:00:04 crc kubenswrapper[4915]: I1124 22:00:04.205593 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2782360d-c02c-49e5-978b-92d85dee47c0-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:04 crc kubenswrapper[4915]: I1124 22:00:04.205949 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nrfm\" (UniqueName: \"kubernetes.io/projected/2782360d-c02c-49e5-978b-92d85dee47c0-kube-api-access-9nrfm\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:04 crc kubenswrapper[4915]: I1124 22:00:04.206017 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2782360d-c02c-49e5-978b-92d85dee47c0-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:00:04 crc kubenswrapper[4915]: I1124 22:00:04.593925 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" event={"ID":"2782360d-c02c-49e5-978b-92d85dee47c0","Type":"ContainerDied","Data":"1275254c188be774e46d8749ad130b58d567029ad785a52e2f42558c35bdadce"} Nov 24 22:00:04 crc kubenswrapper[4915]: I1124 22:00:04.594007 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1275254c188be774e46d8749ad130b58d567029ad785a52e2f42558c35bdadce" Nov 24 22:00:04 crc kubenswrapper[4915]: I1124 22:00:04.594088 4915 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw" Nov 24 22:00:04 crc kubenswrapper[4915]: I1124 22:00:04.679984 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz"] Nov 24 22:00:04 crc kubenswrapper[4915]: I1124 22:00:04.695741 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400315-hnmlz"] Nov 24 22:00:06 crc kubenswrapper[4915]: I1124 22:00:06.448257 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8e29389-d3c0-4175-81dc-ecec5a0c5f35" path="/var/lib/kubelet/pods/d8e29389-d3c0-4175-81dc-ecec5a0c5f35/volumes" Nov 24 22:00:08 crc kubenswrapper[4915]: I1124 22:00:08.427586 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 22:00:08 crc kubenswrapper[4915]: E1124 22:00:08.428737 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:00:17 crc kubenswrapper[4915]: I1124 22:00:17.394860 4915 scope.go:117] "RemoveContainer" containerID="d2c3b9371de3176ecfcf1bd0014531bf5846d041c44d265d3cdeea6e238ad4c9" Nov 24 22:00:19 crc kubenswrapper[4915]: I1124 22:00:19.427491 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 22:00:19 crc kubenswrapper[4915]: E1124 22:00:19.428401 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:00:32 crc kubenswrapper[4915]: I1124 22:00:32.443001 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 22:00:32 crc kubenswrapper[4915]: E1124 22:00:32.444319 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:00:46 crc kubenswrapper[4915]: I1124 22:00:46.427255 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 22:00:46 crc kubenswrapper[4915]: E1124 22:00:46.428077 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:00:58 crc kubenswrapper[4915]: I1124 22:00:58.427663 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 22:00:58 crc kubenswrapper[4915]: E1124 22:00:58.428477 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.180210 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29400361-bksjl"] Nov 24 22:01:00 crc kubenswrapper[4915]: E1124 22:01:00.182251 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2782360d-c02c-49e5-978b-92d85dee47c0" containerName="collect-profiles" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.182418 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2782360d-c02c-49e5-978b-92d85dee47c0" containerName="collect-profiles" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.183070 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2782360d-c02c-49e5-978b-92d85dee47c0" containerName="collect-profiles" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.189735 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.194089 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400361-bksjl"] Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.289539 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvlp9\" (UniqueName: \"kubernetes.io/projected/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-kube-api-access-lvlp9\") pod \"keystone-cron-29400361-bksjl\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.289602 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-config-data\") pod \"keystone-cron-29400361-bksjl\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.289750 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-fernet-keys\") pod \"keystone-cron-29400361-bksjl\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.290073 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-combined-ca-bundle\") pod \"keystone-cron-29400361-bksjl\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.392549 4915 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-combined-ca-bundle\") pod \"keystone-cron-29400361-bksjl\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.392730 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvlp9\" (UniqueName: \"kubernetes.io/projected/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-kube-api-access-lvlp9\") pod \"keystone-cron-29400361-bksjl\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.392795 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-config-data\") pod \"keystone-cron-29400361-bksjl\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.392892 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-fernet-keys\") pod \"keystone-cron-29400361-bksjl\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.399003 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-combined-ca-bundle\") pod \"keystone-cron-29400361-bksjl\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.402711 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-config-data\") pod \"keystone-cron-29400361-bksjl\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.411116 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-fernet-keys\") pod \"keystone-cron-29400361-bksjl\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.414592 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvlp9\" (UniqueName: \"kubernetes.io/projected/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-kube-api-access-lvlp9\") pod \"keystone-cron-29400361-bksjl\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:00 crc kubenswrapper[4915]: I1124 22:01:00.522331 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:01 crc kubenswrapper[4915]: W1124 22:01:01.029081 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fb97ba2_7366_43cc_92db_3e5c5e61a99a.slice/crio-e675b6ae2893efdc09b5ae10d684b9e25cd7c2aa5e6334fb511fc33a8d43bd1c WatchSource:0}: Error finding container e675b6ae2893efdc09b5ae10d684b9e25cd7c2aa5e6334fb511fc33a8d43bd1c: Status 404 returned error can't find the container with id e675b6ae2893efdc09b5ae10d684b9e25cd7c2aa5e6334fb511fc33a8d43bd1c Nov 24 22:01:01 crc kubenswrapper[4915]: I1124 22:01:01.031612 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400361-bksjl"] Nov 24 22:01:01 crc kubenswrapper[4915]: I1124 22:01:01.255601 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-bksjl" event={"ID":"8fb97ba2-7366-43cc-92db-3e5c5e61a99a","Type":"ContainerStarted","Data":"d79788917d2137250f50970ebeb531b6051ab8a42e588d656f59a8b090c9e757"} Nov 24 22:01:01 crc kubenswrapper[4915]: I1124 22:01:01.255640 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-bksjl" event={"ID":"8fb97ba2-7366-43cc-92db-3e5c5e61a99a","Type":"ContainerStarted","Data":"e675b6ae2893efdc09b5ae10d684b9e25cd7c2aa5e6334fb511fc33a8d43bd1c"} Nov 24 22:01:01 crc kubenswrapper[4915]: I1124 22:01:01.279019 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29400361-bksjl" podStartSLOduration=1.2790012769999999 podStartE2EDuration="1.279001277s" podCreationTimestamp="2025-11-24 22:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 22:01:01.277021083 +0000 UTC m=+2479.593273296" watchObservedRunningTime="2025-11-24 22:01:01.279001277 +0000 UTC m=+2479.595253450" Nov 24 22:01:05 
crc kubenswrapper[4915]: I1124 22:01:05.297870 4915 generic.go:334] "Generic (PLEG): container finished" podID="8fb97ba2-7366-43cc-92db-3e5c5e61a99a" containerID="d79788917d2137250f50970ebeb531b6051ab8a42e588d656f59a8b090c9e757" exitCode=0 Nov 24 22:01:05 crc kubenswrapper[4915]: I1124 22:01:05.298100 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-bksjl" event={"ID":"8fb97ba2-7366-43cc-92db-3e5c5e61a99a","Type":"ContainerDied","Data":"d79788917d2137250f50970ebeb531b6051ab8a42e588d656f59a8b090c9e757"} Nov 24 22:01:06 crc kubenswrapper[4915]: I1124 22:01:06.696044 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:06 crc kubenswrapper[4915]: I1124 22:01:06.778024 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-config-data\") pod \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " Nov 24 22:01:06 crc kubenswrapper[4915]: I1124 22:01:06.778114 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-fernet-keys\") pod \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " Nov 24 22:01:06 crc kubenswrapper[4915]: I1124 22:01:06.778151 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvlp9\" (UniqueName: \"kubernetes.io/projected/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-kube-api-access-lvlp9\") pod \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " Nov 24 22:01:06 crc kubenswrapper[4915]: I1124 22:01:06.778238 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-combined-ca-bundle\") pod \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\" (UID: \"8fb97ba2-7366-43cc-92db-3e5c5e61a99a\") " Nov 24 22:01:07 crc kubenswrapper[4915]: I1124 22:01:06.784509 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8fb97ba2-7366-43cc-92db-3e5c5e61a99a" (UID: "8fb97ba2-7366-43cc-92db-3e5c5e61a99a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:01:07 crc kubenswrapper[4915]: I1124 22:01:06.785205 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-kube-api-access-lvlp9" (OuterVolumeSpecName: "kube-api-access-lvlp9") pod "8fb97ba2-7366-43cc-92db-3e5c5e61a99a" (UID: "8fb97ba2-7366-43cc-92db-3e5c5e61a99a"). InnerVolumeSpecName "kube-api-access-lvlp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:01:07 crc kubenswrapper[4915]: I1124 22:01:06.822158 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fb97ba2-7366-43cc-92db-3e5c5e61a99a" (UID: "8fb97ba2-7366-43cc-92db-3e5c5e61a99a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:01:07 crc kubenswrapper[4915]: I1124 22:01:06.841922 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-config-data" (OuterVolumeSpecName: "config-data") pod "8fb97ba2-7366-43cc-92db-3e5c5e61a99a" (UID: "8fb97ba2-7366-43cc-92db-3e5c5e61a99a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:01:07 crc kubenswrapper[4915]: I1124 22:01:06.881520 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:07 crc kubenswrapper[4915]: I1124 22:01:06.881875 4915 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:07 crc kubenswrapper[4915]: I1124 22:01:06.881885 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvlp9\" (UniqueName: \"kubernetes.io/projected/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-kube-api-access-lvlp9\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:07 crc kubenswrapper[4915]: I1124 22:01:06.881896 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fb97ba2-7366-43cc-92db-3e5c5e61a99a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:01:07 crc kubenswrapper[4915]: I1124 22:01:07.324002 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400361-bksjl" event={"ID":"8fb97ba2-7366-43cc-92db-3e5c5e61a99a","Type":"ContainerDied","Data":"e675b6ae2893efdc09b5ae10d684b9e25cd7c2aa5e6334fb511fc33a8d43bd1c"} Nov 24 22:01:07 crc kubenswrapper[4915]: I1124 22:01:07.324039 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e675b6ae2893efdc09b5ae10d684b9e25cd7c2aa5e6334fb511fc33a8d43bd1c" Nov 24 22:01:07 crc kubenswrapper[4915]: I1124 22:01:07.324092 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29400361-bksjl" Nov 24 22:01:11 crc kubenswrapper[4915]: I1124 22:01:11.427298 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 22:01:11 crc kubenswrapper[4915]: E1124 22:01:11.428046 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:01:22 crc kubenswrapper[4915]: I1124 22:01:22.435264 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 22:01:22 crc kubenswrapper[4915]: E1124 22:01:22.436111 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:01:34 crc kubenswrapper[4915]: I1124 22:01:34.432966 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 22:01:34 crc kubenswrapper[4915]: E1124 22:01:34.434667 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:01:49 crc kubenswrapper[4915]: I1124 22:01:49.427515 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 22:01:49 crc kubenswrapper[4915]: E1124 22:01:49.428225 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:02:03 crc kubenswrapper[4915]: I1124 22:02:03.427340 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 22:02:03 crc kubenswrapper[4915]: E1124 22:02:03.428153 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:02:17 crc kubenswrapper[4915]: I1124 22:02:17.428086 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 22:02:17 crc kubenswrapper[4915]: E1124 22:02:17.430507 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:02:28 crc kubenswrapper[4915]: I1124 22:02:28.429649 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388" Nov 24 22:02:29 crc kubenswrapper[4915]: I1124 22:02:29.359333 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"a449c95daa9a45bf8c92ace7ad503db5e63a7f318ae10b5488d2b660b4c746c3"} Nov 24 22:03:26 crc kubenswrapper[4915]: I1124 22:03:26.083494 4915 generic.go:334] "Generic (PLEG): container finished" podID="93a85159-d030-4598-8bad-305ba5b7a459" containerID="ee6dd2c2801457444c4c13eb2322affc46ffe8b9d3e6f05d46e775982c4e4865" exitCode=0 Nov 24 22:03:26 crc kubenswrapper[4915]: I1124 22:03:26.083646 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" event={"ID":"93a85159-d030-4598-8bad-305ba5b7a459","Type":"ContainerDied","Data":"ee6dd2c2801457444c4c13eb2322affc46ffe8b9d3e6f05d46e775982c4e4865"} Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.660287 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.725375 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-inventory\") pod \"93a85159-d030-4598-8bad-305ba5b7a459\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.725503 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-libvirt-secret-0\") pod \"93a85159-d030-4598-8bad-305ba5b7a459\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.725651 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9rbn\" (UniqueName: \"kubernetes.io/projected/93a85159-d030-4598-8bad-305ba5b7a459-kube-api-access-n9rbn\") pod \"93a85159-d030-4598-8bad-305ba5b7a459\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.725675 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-libvirt-combined-ca-bundle\") pod \"93a85159-d030-4598-8bad-305ba5b7a459\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.725746 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-ssh-key\") pod \"93a85159-d030-4598-8bad-305ba5b7a459\" (UID: \"93a85159-d030-4598-8bad-305ba5b7a459\") " Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.733092 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/93a85159-d030-4598-8bad-305ba5b7a459-kube-api-access-n9rbn" (OuterVolumeSpecName: "kube-api-access-n9rbn") pod "93a85159-d030-4598-8bad-305ba5b7a459" (UID: "93a85159-d030-4598-8bad-305ba5b7a459"). InnerVolumeSpecName "kube-api-access-n9rbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.733082 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "93a85159-d030-4598-8bad-305ba5b7a459" (UID: "93a85159-d030-4598-8bad-305ba5b7a459"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.769906 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "93a85159-d030-4598-8bad-305ba5b7a459" (UID: "93a85159-d030-4598-8bad-305ba5b7a459"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.769965 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "93a85159-d030-4598-8bad-305ba5b7a459" (UID: "93a85159-d030-4598-8bad-305ba5b7a459"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.770005 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-inventory" (OuterVolumeSpecName: "inventory") pod "93a85159-d030-4598-8bad-305ba5b7a459" (UID: "93a85159-d030-4598-8bad-305ba5b7a459"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.829153 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.829488 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.829507 4915 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.829528 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9rbn\" (UniqueName: \"kubernetes.io/projected/93a85159-d030-4598-8bad-305ba5b7a459-kube-api-access-n9rbn\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:27 crc kubenswrapper[4915]: I1124 22:03:27.829545 4915 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a85159-d030-4598-8bad-305ba5b7a459-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.111303 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" event={"ID":"93a85159-d030-4598-8bad-305ba5b7a459","Type":"ContainerDied","Data":"aa599eafab3e221cf4bdda7b7e85700caa5dd30acfe962c03082ee664850528b"} Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.111349 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa599eafab3e221cf4bdda7b7e85700caa5dd30acfe962c03082ee664850528b" Nov 24 22:03:28 
crc kubenswrapper[4915]: I1124 22:03:28.111416 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.238174 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz"] Nov 24 22:03:28 crc kubenswrapper[4915]: E1124 22:03:28.238994 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93a85159-d030-4598-8bad-305ba5b7a459" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.239027 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="93a85159-d030-4598-8bad-305ba5b7a459" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 22:03:28 crc kubenswrapper[4915]: E1124 22:03:28.239096 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fb97ba2-7366-43cc-92db-3e5c5e61a99a" containerName="keystone-cron" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.239111 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fb97ba2-7366-43cc-92db-3e5c5e61a99a" containerName="keystone-cron" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.239510 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="93a85159-d030-4598-8bad-305ba5b7a459" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.239548 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fb97ba2-7366-43cc-92db-3e5c5e61a99a" containerName="keystone-cron" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.240720 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.244666 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.245077 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.245162 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.245213 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.245684 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.246034 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.246622 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.259208 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz"] Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.341020 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfj42\" (UniqueName: \"kubernetes.io/projected/96ebeccc-beda-4622-9d03-6aabb443b1fe-kube-api-access-bfj42\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 
22:03:28.341085 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.341113 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.341141 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.341476 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.341512 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.341529 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.341545 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.341755 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.445327 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: 
\"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.445390 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.445426 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.445554 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.445735 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfj42\" (UniqueName: \"kubernetes.io/projected/96ebeccc-beda-4622-9d03-6aabb443b1fe-kube-api-access-bfj42\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.445829 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.445872 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.445923 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.446007 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.446795 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.451510 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.452750 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.458983 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.459535 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.459579 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.466151 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.466565 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.471081 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfj42\" (UniqueName: \"kubernetes.io/projected/96ebeccc-beda-4622-9d03-6aabb443b1fe-kube-api-access-bfj42\") pod \"nova-edpm-deployment-openstack-edpm-ipam-dmpwz\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:28 crc kubenswrapper[4915]: I1124 22:03:28.595657 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" Nov 24 22:03:29 crc kubenswrapper[4915]: I1124 22:03:29.199634 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz"] Nov 24 22:03:29 crc kubenswrapper[4915]: I1124 22:03:29.202415 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:03:30 crc kubenswrapper[4915]: I1124 22:03:30.144115 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" event={"ID":"96ebeccc-beda-4622-9d03-6aabb443b1fe","Type":"ContainerStarted","Data":"d05d565a3cf92093454af9882d74a7722575b6f645cec5d668cf49dbeb6d7b09"} Nov 24 22:03:30 crc kubenswrapper[4915]: I1124 22:03:30.144613 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" event={"ID":"96ebeccc-beda-4622-9d03-6aabb443b1fe","Type":"ContainerStarted","Data":"43016e60c4498ee77d5589d340f0ac9f9c7edf060747e49453fd08ba4c36d818"} Nov 24 22:03:30 crc kubenswrapper[4915]: I1124 22:03:30.178783 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" podStartSLOduration=1.732560761 podStartE2EDuration="2.178748641s" podCreationTimestamp="2025-11-24 22:03:28 +0000 UTC" firstStartedPulling="2025-11-24 22:03:29.202236361 +0000 UTC m=+2627.518488534" lastFinishedPulling="2025-11-24 22:03:29.648424231 +0000 UTC m=+2627.964676414" observedRunningTime="2025-11-24 22:03:30.168051041 +0000 UTC m=+2628.484303224" watchObservedRunningTime="2025-11-24 22:03:30.178748641 +0000 UTC m=+2628.495000844" Nov 24 22:03:41 crc kubenswrapper[4915]: I1124 22:03:41.591941 4915 trace.go:236] Trace[1693989652]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (24-Nov-2025 22:03:40.548) (total time: 1043ms): Nov 24 
22:03:41 crc kubenswrapper[4915]: Trace[1693989652]: [1.043896095s] [1.043896095s] END Nov 24 22:04:33 crc kubenswrapper[4915]: I1124 22:04:33.905394 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5zkbc"] Nov 24 22:04:33 crc kubenswrapper[4915]: I1124 22:04:33.908448 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zkbc" Nov 24 22:04:33 crc kubenswrapper[4915]: I1124 22:04:33.926535 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5zkbc"] Nov 24 22:04:33 crc kubenswrapper[4915]: I1124 22:04:33.982428 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mx9b\" (UniqueName: \"kubernetes.io/projected/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-kube-api-access-6mx9b\") pod \"redhat-operators-5zkbc\" (UID: \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\") " pod="openshift-marketplace/redhat-operators-5zkbc" Nov 24 22:04:33 crc kubenswrapper[4915]: I1124 22:04:33.982728 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-catalog-content\") pod \"redhat-operators-5zkbc\" (UID: \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\") " pod="openshift-marketplace/redhat-operators-5zkbc" Nov 24 22:04:33 crc kubenswrapper[4915]: I1124 22:04:33.983462 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-utilities\") pod \"redhat-operators-5zkbc\" (UID: \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\") " pod="openshift-marketplace/redhat-operators-5zkbc" Nov 24 22:04:34 crc kubenswrapper[4915]: I1124 22:04:34.085511 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-utilities\") pod \"redhat-operators-5zkbc\" (UID: \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\") " pod="openshift-marketplace/redhat-operators-5zkbc"
Nov 24 22:04:34 crc kubenswrapper[4915]: I1124 22:04:34.085633 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mx9b\" (UniqueName: \"kubernetes.io/projected/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-kube-api-access-6mx9b\") pod \"redhat-operators-5zkbc\" (UID: \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\") " pod="openshift-marketplace/redhat-operators-5zkbc"
Nov 24 22:04:34 crc kubenswrapper[4915]: I1124 22:04:34.085675 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-catalog-content\") pod \"redhat-operators-5zkbc\" (UID: \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\") " pod="openshift-marketplace/redhat-operators-5zkbc"
Nov 24 22:04:34 crc kubenswrapper[4915]: I1124 22:04:34.086199 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-catalog-content\") pod \"redhat-operators-5zkbc\" (UID: \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\") " pod="openshift-marketplace/redhat-operators-5zkbc"
Nov 24 22:04:34 crc kubenswrapper[4915]: I1124 22:04:34.086220 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-utilities\") pod \"redhat-operators-5zkbc\" (UID: \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\") " pod="openshift-marketplace/redhat-operators-5zkbc"
Nov 24 22:04:34 crc kubenswrapper[4915]: I1124 22:04:34.117739 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mx9b\" (UniqueName: \"kubernetes.io/projected/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-kube-api-access-6mx9b\") pod \"redhat-operators-5zkbc\" (UID: \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\") " pod="openshift-marketplace/redhat-operators-5zkbc"
Nov 24 22:04:34 crc kubenswrapper[4915]: I1124 22:04:34.243987 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zkbc"
Nov 24 22:04:34 crc kubenswrapper[4915]: I1124 22:04:34.846851 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5zkbc"]
Nov 24 22:04:35 crc kubenswrapper[4915]: I1124 22:04:35.901297 4915 generic.go:334] "Generic (PLEG): container finished" podID="0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" containerID="d084afca06dd6addc9c5bbe83a96a5b057c71cfd6df934017e3f908481d2737c" exitCode=0
Nov 24 22:04:35 crc kubenswrapper[4915]: I1124 22:04:35.901385 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zkbc" event={"ID":"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7","Type":"ContainerDied","Data":"d084afca06dd6addc9c5bbe83a96a5b057c71cfd6df934017e3f908481d2737c"}
Nov 24 22:04:35 crc kubenswrapper[4915]: I1124 22:04:35.901869 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zkbc" event={"ID":"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7","Type":"ContainerStarted","Data":"46be5fcb4efa28c0c21988b1474277403efaef412ccf9f354dfa50f3b7aff27b"}
Nov 24 22:04:36 crc kubenswrapper[4915]: I1124 22:04:36.921863 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zkbc" event={"ID":"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7","Type":"ContainerStarted","Data":"15ecd5ba95cff68b00339fa2bd68c3685ab2bd8beb6d894fe01cccf420fb2857"}
Nov 24 22:04:40 crc kubenswrapper[4915]: I1124 22:04:40.973840 4915 generic.go:334] "Generic (PLEG): container finished" podID="0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" containerID="15ecd5ba95cff68b00339fa2bd68c3685ab2bd8beb6d894fe01cccf420fb2857" exitCode=0
Nov 24 22:04:40 crc kubenswrapper[4915]: I1124 22:04:40.973999 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zkbc" event={"ID":"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7","Type":"ContainerDied","Data":"15ecd5ba95cff68b00339fa2bd68c3685ab2bd8beb6d894fe01cccf420fb2857"}
Nov 24 22:04:41 crc kubenswrapper[4915]: I1124 22:04:41.991490 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zkbc" event={"ID":"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7","Type":"ContainerStarted","Data":"e2afb77360e7d1c7f30ac5f983597b1ae1690f64460aef3ac374f1a697930bc7"}
Nov 24 22:04:42 crc kubenswrapper[4915]: I1124 22:04:42.016119 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5zkbc" podStartSLOduration=3.485048447 podStartE2EDuration="9.016082645s" podCreationTimestamp="2025-11-24 22:04:33 +0000 UTC" firstStartedPulling="2025-11-24 22:04:35.904095085 +0000 UTC m=+2694.220347258" lastFinishedPulling="2025-11-24 22:04:41.435129273 +0000 UTC m=+2699.751381456" observedRunningTime="2025-11-24 22:04:42.011797709 +0000 UTC m=+2700.328049882" watchObservedRunningTime="2025-11-24 22:04:42.016082645 +0000 UTC m=+2700.332334828"
Nov 24 22:04:44 crc kubenswrapper[4915]: I1124 22:04:44.244367 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5zkbc"
Nov 24 22:04:44 crc kubenswrapper[4915]: I1124 22:04:44.244999 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5zkbc"
Nov 24 22:04:45 crc kubenswrapper[4915]: I1124 22:04:45.313136 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5zkbc" podUID="0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" containerName="registry-server" probeResult="failure" output=<
Nov 24 22:04:45 crc kubenswrapper[4915]: 	timeout: failed to connect service ":50051" within 1s
Nov 24 22:04:45 crc kubenswrapper[4915]: 	>
Nov 24 22:04:54 crc kubenswrapper[4915]: I1124 22:04:54.327130 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 22:04:54 crc kubenswrapper[4915]: I1124 22:04:54.327934 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 22:04:55 crc kubenswrapper[4915]: I1124 22:04:55.356578 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5zkbc" podUID="0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" containerName="registry-server" probeResult="failure" output=<
Nov 24 22:04:55 crc kubenswrapper[4915]: 	timeout: failed to connect service ":50051" within 1s
Nov 24 22:04:55 crc kubenswrapper[4915]: 	>
Nov 24 22:05:04 crc kubenswrapper[4915]: I1124 22:05:04.297193 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5zkbc"
Nov 24 22:05:04 crc kubenswrapper[4915]: I1124 22:05:04.373386 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5zkbc"
Nov 24 22:05:04 crc kubenswrapper[4915]: I1124 22:05:04.550659 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5zkbc"]
Nov 24 22:05:06 crc kubenswrapper[4915]: I1124 22:05:06.287005 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5zkbc" podUID="0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" containerName="registry-server" containerID="cri-o://e2afb77360e7d1c7f30ac5f983597b1ae1690f64460aef3ac374f1a697930bc7" gracePeriod=2
Nov 24 22:05:06 crc kubenswrapper[4915]: I1124 22:05:06.817156 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zkbc"
Nov 24 22:05:06 crc kubenswrapper[4915]: I1124 22:05:06.845780 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-utilities\") pod \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\" (UID: \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\") "
Nov 24 22:05:06 crc kubenswrapper[4915]: I1124 22:05:06.845846 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-catalog-content\") pod \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\" (UID: \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\") "
Nov 24 22:05:06 crc kubenswrapper[4915]: I1124 22:05:06.846079 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mx9b\" (UniqueName: \"kubernetes.io/projected/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-kube-api-access-6mx9b\") pod \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\" (UID: \"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7\") "
Nov 24 22:05:06 crc kubenswrapper[4915]: I1124 22:05:06.847694 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-utilities" (OuterVolumeSpecName: "utilities") pod "0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" (UID: "0b10dfc9-afd0-4f6d-a043-baa36d90b0b7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 22:05:06 crc kubenswrapper[4915]: I1124 22:05:06.848426 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 22:05:06 crc kubenswrapper[4915]: I1124 22:05:06.852445 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-kube-api-access-6mx9b" (OuterVolumeSpecName: "kube-api-access-6mx9b") pod "0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" (UID: "0b10dfc9-afd0-4f6d-a043-baa36d90b0b7"). InnerVolumeSpecName "kube-api-access-6mx9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:05:06 crc kubenswrapper[4915]: I1124 22:05:06.951159 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mx9b\" (UniqueName: \"kubernetes.io/projected/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-kube-api-access-6mx9b\") on node \"crc\" DevicePath \"\""
Nov 24 22:05:06 crc kubenswrapper[4915]: I1124 22:05:06.958813 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" (UID: "0b10dfc9-afd0-4f6d-a043-baa36d90b0b7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.054378 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.311060 4915 generic.go:334] "Generic (PLEG): container finished" podID="0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" containerID="e2afb77360e7d1c7f30ac5f983597b1ae1690f64460aef3ac374f1a697930bc7" exitCode=0
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.311220 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zkbc" event={"ID":"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7","Type":"ContainerDied","Data":"e2afb77360e7d1c7f30ac5f983597b1ae1690f64460aef3ac374f1a697930bc7"}
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.311261 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zkbc" event={"ID":"0b10dfc9-afd0-4f6d-a043-baa36d90b0b7","Type":"ContainerDied","Data":"46be5fcb4efa28c0c21988b1474277403efaef412ccf9f354dfa50f3b7aff27b"}
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.311290 4915 scope.go:117] "RemoveContainer" containerID="e2afb77360e7d1c7f30ac5f983597b1ae1690f64460aef3ac374f1a697930bc7"
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.311577 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zkbc"
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.342539 4915 scope.go:117] "RemoveContainer" containerID="15ecd5ba95cff68b00339fa2bd68c3685ab2bd8beb6d894fe01cccf420fb2857"
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.389693 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5zkbc"]
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.394360 4915 scope.go:117] "RemoveContainer" containerID="d084afca06dd6addc9c5bbe83a96a5b057c71cfd6df934017e3f908481d2737c"
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.402968 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5zkbc"]
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.440086 4915 scope.go:117] "RemoveContainer" containerID="e2afb77360e7d1c7f30ac5f983597b1ae1690f64460aef3ac374f1a697930bc7"
Nov 24 22:05:07 crc kubenswrapper[4915]: E1124 22:05:07.440716 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2afb77360e7d1c7f30ac5f983597b1ae1690f64460aef3ac374f1a697930bc7\": container with ID starting with e2afb77360e7d1c7f30ac5f983597b1ae1690f64460aef3ac374f1a697930bc7 not found: ID does not exist" containerID="e2afb77360e7d1c7f30ac5f983597b1ae1690f64460aef3ac374f1a697930bc7"
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.440877 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2afb77360e7d1c7f30ac5f983597b1ae1690f64460aef3ac374f1a697930bc7"} err="failed to get container status \"e2afb77360e7d1c7f30ac5f983597b1ae1690f64460aef3ac374f1a697930bc7\": rpc error: code = NotFound desc = could not find container \"e2afb77360e7d1c7f30ac5f983597b1ae1690f64460aef3ac374f1a697930bc7\": container with ID starting with e2afb77360e7d1c7f30ac5f983597b1ae1690f64460aef3ac374f1a697930bc7 not found: ID does not exist"
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.441002 4915 scope.go:117] "RemoveContainer" containerID="15ecd5ba95cff68b00339fa2bd68c3685ab2bd8beb6d894fe01cccf420fb2857"
Nov 24 22:05:07 crc kubenswrapper[4915]: E1124 22:05:07.441531 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15ecd5ba95cff68b00339fa2bd68c3685ab2bd8beb6d894fe01cccf420fb2857\": container with ID starting with 15ecd5ba95cff68b00339fa2bd68c3685ab2bd8beb6d894fe01cccf420fb2857 not found: ID does not exist" containerID="15ecd5ba95cff68b00339fa2bd68c3685ab2bd8beb6d894fe01cccf420fb2857"
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.441655 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15ecd5ba95cff68b00339fa2bd68c3685ab2bd8beb6d894fe01cccf420fb2857"} err="failed to get container status \"15ecd5ba95cff68b00339fa2bd68c3685ab2bd8beb6d894fe01cccf420fb2857\": rpc error: code = NotFound desc = could not find container \"15ecd5ba95cff68b00339fa2bd68c3685ab2bd8beb6d894fe01cccf420fb2857\": container with ID starting with 15ecd5ba95cff68b00339fa2bd68c3685ab2bd8beb6d894fe01cccf420fb2857 not found: ID does not exist"
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.441750 4915 scope.go:117] "RemoveContainer" containerID="d084afca06dd6addc9c5bbe83a96a5b057c71cfd6df934017e3f908481d2737c"
Nov 24 22:05:07 crc kubenswrapper[4915]: E1124 22:05:07.442241 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d084afca06dd6addc9c5bbe83a96a5b057c71cfd6df934017e3f908481d2737c\": container with ID starting with d084afca06dd6addc9c5bbe83a96a5b057c71cfd6df934017e3f908481d2737c not found: ID does not exist" containerID="d084afca06dd6addc9c5bbe83a96a5b057c71cfd6df934017e3f908481d2737c"
Nov 24 22:05:07 crc kubenswrapper[4915]: I1124 22:05:07.442375 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d084afca06dd6addc9c5bbe83a96a5b057c71cfd6df934017e3f908481d2737c"} err="failed to get container status \"d084afca06dd6addc9c5bbe83a96a5b057c71cfd6df934017e3f908481d2737c\": rpc error: code = NotFound desc = could not find container \"d084afca06dd6addc9c5bbe83a96a5b057c71cfd6df934017e3f908481d2737c\": container with ID starting with d084afca06dd6addc9c5bbe83a96a5b057c71cfd6df934017e3f908481d2737c not found: ID does not exist"
Nov 24 22:05:08 crc kubenswrapper[4915]: I1124 22:05:08.440027 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" path="/var/lib/kubelet/pods/0b10dfc9-afd0-4f6d-a043-baa36d90b0b7/volumes"
Nov 24 22:05:24 crc kubenswrapper[4915]: I1124 22:05:24.327089 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 22:05:24 crc kubenswrapper[4915]: I1124 22:05:24.327678 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 22:05:54 crc kubenswrapper[4915]: I1124 22:05:54.327750 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 22:05:54 crc kubenswrapper[4915]: I1124 22:05:54.328284 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 22:05:54 crc kubenswrapper[4915]: I1124 22:05:54.328331 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd"
Nov 24 22:05:54 crc kubenswrapper[4915]: I1124 22:05:54.329208 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a449c95daa9a45bf8c92ace7ad503db5e63a7f318ae10b5488d2b660b4c746c3"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 22:05:54 crc kubenswrapper[4915]: I1124 22:05:54.329258 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://a449c95daa9a45bf8c92ace7ad503db5e63a7f318ae10b5488d2b660b4c746c3" gracePeriod=600
Nov 24 22:05:54 crc kubenswrapper[4915]: I1124 22:05:54.881070 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="a449c95daa9a45bf8c92ace7ad503db5e63a7f318ae10b5488d2b660b4c746c3" exitCode=0
Nov 24 22:05:54 crc kubenswrapper[4915]: I1124 22:05:54.881123 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"a449c95daa9a45bf8c92ace7ad503db5e63a7f318ae10b5488d2b660b4c746c3"}
Nov 24 22:05:54 crc kubenswrapper[4915]: I1124 22:05:54.881471 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537"}
Nov 24 22:05:54 crc kubenswrapper[4915]: I1124 22:05:54.881498 4915 scope.go:117] "RemoveContainer" containerID="22b32b27a01009367ce2885e17cf2d1b259359cb42594a96255fea61acb83388"
Nov 24 22:06:14 crc kubenswrapper[4915]: I1124 22:06:14.106862 4915 generic.go:334] "Generic (PLEG): container finished" podID="96ebeccc-beda-4622-9d03-6aabb443b1fe" containerID="d05d565a3cf92093454af9882d74a7722575b6f645cec5d668cf49dbeb6d7b09" exitCode=0
Nov 24 22:06:14 crc kubenswrapper[4915]: I1124 22:06:14.106964 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" event={"ID":"96ebeccc-beda-4622-9d03-6aabb443b1fe","Type":"ContainerDied","Data":"d05d565a3cf92093454af9882d74a7722575b6f645cec5d668cf49dbeb6d7b09"}
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.701410 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz"
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.807361 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-migration-ssh-key-1\") pod \"96ebeccc-beda-4622-9d03-6aabb443b1fe\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") "
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.807503 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-ssh-key\") pod \"96ebeccc-beda-4622-9d03-6aabb443b1fe\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") "
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.807614 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-cell1-compute-config-0\") pod \"96ebeccc-beda-4622-9d03-6aabb443b1fe\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") "
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.807718 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-migration-ssh-key-0\") pod \"96ebeccc-beda-4622-9d03-6aabb443b1fe\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") "
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.807857 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-extra-config-0\") pod \"96ebeccc-beda-4622-9d03-6aabb443b1fe\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") "
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.807957 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfj42\" (UniqueName: \"kubernetes.io/projected/96ebeccc-beda-4622-9d03-6aabb443b1fe-kube-api-access-bfj42\") pod \"96ebeccc-beda-4622-9d03-6aabb443b1fe\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") "
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.808134 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-combined-ca-bundle\") pod \"96ebeccc-beda-4622-9d03-6aabb443b1fe\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") "
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.808356 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-cell1-compute-config-1\") pod \"96ebeccc-beda-4622-9d03-6aabb443b1fe\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") "
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.808448 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-inventory\") pod \"96ebeccc-beda-4622-9d03-6aabb443b1fe\" (UID: \"96ebeccc-beda-4622-9d03-6aabb443b1fe\") "
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.817049 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ebeccc-beda-4622-9d03-6aabb443b1fe-kube-api-access-bfj42" (OuterVolumeSpecName: "kube-api-access-bfj42") pod "96ebeccc-beda-4622-9d03-6aabb443b1fe" (UID: "96ebeccc-beda-4622-9d03-6aabb443b1fe"). InnerVolumeSpecName "kube-api-access-bfj42". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.817373 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "96ebeccc-beda-4622-9d03-6aabb443b1fe" (UID: "96ebeccc-beda-4622-9d03-6aabb443b1fe"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.853551 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "96ebeccc-beda-4622-9d03-6aabb443b1fe" (UID: "96ebeccc-beda-4622-9d03-6aabb443b1fe"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.854356 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-inventory" (OuterVolumeSpecName: "inventory") pod "96ebeccc-beda-4622-9d03-6aabb443b1fe" (UID: "96ebeccc-beda-4622-9d03-6aabb443b1fe"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.878733 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "96ebeccc-beda-4622-9d03-6aabb443b1fe" (UID: "96ebeccc-beda-4622-9d03-6aabb443b1fe"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.881949 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "96ebeccc-beda-4622-9d03-6aabb443b1fe" (UID: "96ebeccc-beda-4622-9d03-6aabb443b1fe"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.885326 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "96ebeccc-beda-4622-9d03-6aabb443b1fe" (UID: "96ebeccc-beda-4622-9d03-6aabb443b1fe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.886320 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "96ebeccc-beda-4622-9d03-6aabb443b1fe" (UID: "96ebeccc-beda-4622-9d03-6aabb443b1fe"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.888913 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "96ebeccc-beda-4622-9d03-6aabb443b1fe" (UID: "96ebeccc-beda-4622-9d03-6aabb443b1fe"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.911739 4915 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.911769 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.911784 4915 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.911811 4915 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.911821 4915 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.911829 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfj42\" (UniqueName: \"kubernetes.io/projected/96ebeccc-beda-4622-9d03-6aabb443b1fe-kube-api-access-bfj42\") on node \"crc\" DevicePath \"\""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.911838 4915 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.911846 4915 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Nov 24 22:06:15 crc kubenswrapper[4915]: I1124 22:06:15.911856 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96ebeccc-beda-4622-9d03-6aabb443b1fe-inventory\") on node \"crc\" DevicePath \"\""
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.133033 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz" event={"ID":"96ebeccc-beda-4622-9d03-6aabb443b1fe","Type":"ContainerDied","Data":"43016e60c4498ee77d5589d340f0ac9f9c7edf060747e49453fd08ba4c36d818"}
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.133444 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43016e60c4498ee77d5589d340f0ac9f9c7edf060747e49453fd08ba4c36d818"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.133079 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-dmpwz"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.253438 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk"]
Nov 24 22:06:16 crc kubenswrapper[4915]: E1124 22:06:16.254051 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ebeccc-beda-4622-9d03-6aabb443b1fe" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.254077 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ebeccc-beda-4622-9d03-6aabb443b1fe" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 24 22:06:16 crc kubenswrapper[4915]: E1124 22:06:16.254122 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" containerName="extract-utilities"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.254132 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" containerName="extract-utilities"
Nov 24 22:06:16 crc kubenswrapper[4915]: E1124 22:06:16.254152 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" containerName="registry-server"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.254160 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" containerName="registry-server"
Nov 24 22:06:16 crc kubenswrapper[4915]: E1124 22:06:16.254192 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" containerName="extract-content"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.254202 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" containerName="extract-content"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.254728 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b10dfc9-afd0-4f6d-a043-baa36d90b0b7" containerName="registry-server"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.254760 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ebeccc-beda-4622-9d03-6aabb443b1fe" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.255740 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.259111 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.261074 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.261395 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.263239 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.263431 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.292381 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk"]
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.322522 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.322619 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.322653 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.322745 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk"
Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.323334 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") "
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.323588 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjxtx\" (UniqueName: \"kubernetes.io/projected/13961a83-9c81-49e1-b894-3651f2f91eb3-kube-api-access-qjxtx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.323767 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.425640 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.425718 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjxtx\" (UniqueName: \"kubernetes.io/projected/13961a83-9c81-49e1-b894-3651f2f91eb3-kube-api-access-qjxtx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.425767 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.425820 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.425851 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.425873 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.425916 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-inventory\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.429790 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.430215 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.431479 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.431868 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc 
kubenswrapper[4915]: I1124 22:06:16.432835 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.432991 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.448758 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjxtx\" (UniqueName: \"kubernetes.io/projected/13961a83-9c81-49e1-b894-3651f2f91eb3-kube-api-access-qjxtx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:16 crc kubenswrapper[4915]: I1124 22:06:16.606868 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:06:17 crc kubenswrapper[4915]: I1124 22:06:17.264733 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk"] Nov 24 22:06:18 crc kubenswrapper[4915]: I1124 22:06:18.164616 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" event={"ID":"13961a83-9c81-49e1-b894-3651f2f91eb3","Type":"ContainerStarted","Data":"7433b4902ee95ecd11c6be9d8d549412b34c8c2304d87f2ff6d3315e54fb41b5"} Nov 24 22:06:18 crc kubenswrapper[4915]: I1124 22:06:18.165103 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" event={"ID":"13961a83-9c81-49e1-b894-3651f2f91eb3","Type":"ContainerStarted","Data":"755dc12d75dbad8657e2694a15a14b61ef6e5722e7902f1b3ec4187aaaae2d4f"} Nov 24 22:06:18 crc kubenswrapper[4915]: I1124 22:06:18.197531 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" podStartSLOduration=1.752711253 podStartE2EDuration="2.197507175s" podCreationTimestamp="2025-11-24 22:06:16 +0000 UTC" firstStartedPulling="2025-11-24 22:06:17.264928416 +0000 UTC m=+2795.581180589" lastFinishedPulling="2025-11-24 22:06:17.709724338 +0000 UTC m=+2796.025976511" observedRunningTime="2025-11-24 22:06:18.194237977 +0000 UTC m=+2796.510490170" watchObservedRunningTime="2025-11-24 22:06:18.197507175 +0000 UTC m=+2796.513759358" Nov 24 22:07:39 crc kubenswrapper[4915]: I1124 22:07:39.378273 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wx7zv"] Nov 24 22:07:39 crc kubenswrapper[4915]: I1124 22:07:39.382053 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:39 crc kubenswrapper[4915]: I1124 22:07:39.390384 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wx7zv"] Nov 24 22:07:39 crc kubenswrapper[4915]: I1124 22:07:39.553444 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dqh8\" (UniqueName: \"kubernetes.io/projected/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-kube-api-access-9dqh8\") pod \"community-operators-wx7zv\" (UID: \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\") " pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:39 crc kubenswrapper[4915]: I1124 22:07:39.553550 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-catalog-content\") pod \"community-operators-wx7zv\" (UID: \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\") " pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:39 crc kubenswrapper[4915]: I1124 22:07:39.553910 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-utilities\") pod \"community-operators-wx7zv\" (UID: \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\") " pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:39 crc kubenswrapper[4915]: I1124 22:07:39.656465 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-utilities\") pod \"community-operators-wx7zv\" (UID: \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\") " pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:39 crc kubenswrapper[4915]: I1124 22:07:39.656606 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9dqh8\" (UniqueName: \"kubernetes.io/projected/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-kube-api-access-9dqh8\") pod \"community-operators-wx7zv\" (UID: \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\") " pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:39 crc kubenswrapper[4915]: I1124 22:07:39.656688 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-catalog-content\") pod \"community-operators-wx7zv\" (UID: \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\") " pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:39 crc kubenswrapper[4915]: I1124 22:07:39.657739 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-catalog-content\") pod \"community-operators-wx7zv\" (UID: \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\") " pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:39 crc kubenswrapper[4915]: I1124 22:07:39.658161 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-utilities\") pod \"community-operators-wx7zv\" (UID: \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\") " pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:39 crc kubenswrapper[4915]: I1124 22:07:39.677559 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dqh8\" (UniqueName: \"kubernetes.io/projected/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-kube-api-access-9dqh8\") pod \"community-operators-wx7zv\" (UID: \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\") " pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:39 crc kubenswrapper[4915]: I1124 22:07:39.723588 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:40 crc kubenswrapper[4915]: I1124 22:07:40.247945 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wx7zv"] Nov 24 22:07:40 crc kubenswrapper[4915]: W1124 22:07:40.266838 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05d642b3_ffd3_49cc_ba0d_1c7f9113f2d6.slice/crio-84d86c508acc1d97635bbfba6ab34faf7f148f83bc375d7194a49bb2c404e9c2 WatchSource:0}: Error finding container 84d86c508acc1d97635bbfba6ab34faf7f148f83bc375d7194a49bb2c404e9c2: Status 404 returned error can't find the container with id 84d86c508acc1d97635bbfba6ab34faf7f148f83bc375d7194a49bb2c404e9c2 Nov 24 22:07:41 crc kubenswrapper[4915]: I1124 22:07:41.222334 4915 generic.go:334] "Generic (PLEG): container finished" podID="05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" containerID="96c985f5be81b5ff15049f413d865fbad8287e1624361f9f014c879e3d5b4819" exitCode=0 Nov 24 22:07:41 crc kubenswrapper[4915]: I1124 22:07:41.222433 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx7zv" event={"ID":"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6","Type":"ContainerDied","Data":"96c985f5be81b5ff15049f413d865fbad8287e1624361f9f014c879e3d5b4819"} Nov 24 22:07:41 crc kubenswrapper[4915]: I1124 22:07:41.222834 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx7zv" event={"ID":"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6","Type":"ContainerStarted","Data":"84d86c508acc1d97635bbfba6ab34faf7f148f83bc375d7194a49bb2c404e9c2"} Nov 24 22:07:41 crc kubenswrapper[4915]: I1124 22:07:41.958016 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p9f6r"] Nov 24 22:07:41 crc kubenswrapper[4915]: I1124 22:07:41.962631 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:41 crc kubenswrapper[4915]: I1124 22:07:41.968234 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p9f6r"] Nov 24 22:07:42 crc kubenswrapper[4915]: I1124 22:07:42.119234 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-catalog-content\") pod \"redhat-marketplace-p9f6r\" (UID: \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\") " pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:42 crc kubenswrapper[4915]: I1124 22:07:42.119466 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-utilities\") pod \"redhat-marketplace-p9f6r\" (UID: \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\") " pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:42 crc kubenswrapper[4915]: I1124 22:07:42.119550 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lfc2\" (UniqueName: \"kubernetes.io/projected/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-kube-api-access-4lfc2\") pod \"redhat-marketplace-p9f6r\" (UID: \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\") " pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:42 crc kubenswrapper[4915]: I1124 22:07:42.221648 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-utilities\") pod \"redhat-marketplace-p9f6r\" (UID: \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\") " pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:42 crc kubenswrapper[4915]: I1124 22:07:42.221726 4915 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4lfc2\" (UniqueName: \"kubernetes.io/projected/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-kube-api-access-4lfc2\") pod \"redhat-marketplace-p9f6r\" (UID: \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\") " pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:42 crc kubenswrapper[4915]: I1124 22:07:42.221815 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-catalog-content\") pod \"redhat-marketplace-p9f6r\" (UID: \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\") " pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:42 crc kubenswrapper[4915]: I1124 22:07:42.222243 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-catalog-content\") pod \"redhat-marketplace-p9f6r\" (UID: \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\") " pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:42 crc kubenswrapper[4915]: I1124 22:07:42.222466 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-utilities\") pod \"redhat-marketplace-p9f6r\" (UID: \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\") " pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:42 crc kubenswrapper[4915]: I1124 22:07:42.240836 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lfc2\" (UniqueName: \"kubernetes.io/projected/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-kube-api-access-4lfc2\") pod \"redhat-marketplace-p9f6r\" (UID: \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\") " pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:42 crc kubenswrapper[4915]: I1124 22:07:42.293837 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:42 crc kubenswrapper[4915]: I1124 22:07:42.794646 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p9f6r"] Nov 24 22:07:43 crc kubenswrapper[4915]: I1124 22:07:43.245694 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx7zv" event={"ID":"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6","Type":"ContainerStarted","Data":"6bb7382a414407fcc20a082178042f0050c46e9dee912199973662b3eee923cd"} Nov 24 22:07:43 crc kubenswrapper[4915]: I1124 22:07:43.247995 4915 generic.go:334] "Generic (PLEG): container finished" podID="0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" containerID="dfb8656709a929bd0090b2179bcc7cce89609f6db576630f50ced3952dc5d9a9" exitCode=0 Nov 24 22:07:43 crc kubenswrapper[4915]: I1124 22:07:43.248042 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9f6r" event={"ID":"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5","Type":"ContainerDied","Data":"dfb8656709a929bd0090b2179bcc7cce89609f6db576630f50ced3952dc5d9a9"} Nov 24 22:07:43 crc kubenswrapper[4915]: I1124 22:07:43.248067 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9f6r" event={"ID":"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5","Type":"ContainerStarted","Data":"fa1b191ff0a477eb2390f6c7a057b2654e159cd9d6a710a56213e4acf50b4c85"} Nov 24 22:07:44 crc kubenswrapper[4915]: I1124 22:07:44.260884 4915 generic.go:334] "Generic (PLEG): container finished" podID="05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" containerID="6bb7382a414407fcc20a082178042f0050c46e9dee912199973662b3eee923cd" exitCode=0 Nov 24 22:07:44 crc kubenswrapper[4915]: I1124 22:07:44.261085 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx7zv" 
event={"ID":"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6","Type":"ContainerDied","Data":"6bb7382a414407fcc20a082178042f0050c46e9dee912199973662b3eee923cd"} Nov 24 22:07:45 crc kubenswrapper[4915]: I1124 22:07:45.275181 4915 generic.go:334] "Generic (PLEG): container finished" podID="0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" containerID="9ac777a446573d08480681e4e084d458f2bd8b14e957559c94ec2079e3e2b635" exitCode=0 Nov 24 22:07:45 crc kubenswrapper[4915]: I1124 22:07:45.275609 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9f6r" event={"ID":"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5","Type":"ContainerDied","Data":"9ac777a446573d08480681e4e084d458f2bd8b14e957559c94ec2079e3e2b635"} Nov 24 22:07:45 crc kubenswrapper[4915]: I1124 22:07:45.281530 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx7zv" event={"ID":"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6","Type":"ContainerStarted","Data":"ed08f83661fa732287b8a28bafe64d06174248cb635e7a503e1df233427dba32"} Nov 24 22:07:45 crc kubenswrapper[4915]: I1124 22:07:45.327857 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wx7zv" podStartSLOduration=2.625551711 podStartE2EDuration="6.327764483s" podCreationTimestamp="2025-11-24 22:07:39 +0000 UTC" firstStartedPulling="2025-11-24 22:07:41.226030396 +0000 UTC m=+2879.542282609" lastFinishedPulling="2025-11-24 22:07:44.928243198 +0000 UTC m=+2883.244495381" observedRunningTime="2025-11-24 22:07:45.317116304 +0000 UTC m=+2883.633368497" watchObservedRunningTime="2025-11-24 22:07:45.327764483 +0000 UTC m=+2883.644016666" Nov 24 22:07:46 crc kubenswrapper[4915]: I1124 22:07:46.296517 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9f6r" 
event={"ID":"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5","Type":"ContainerStarted","Data":"a20906b72e4bbb828fc748ec6d059673dfa875b265992ce74f789981654f1159"} Nov 24 22:07:46 crc kubenswrapper[4915]: I1124 22:07:46.318430 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p9f6r" podStartSLOduration=2.7836096120000002 podStartE2EDuration="5.318411063s" podCreationTimestamp="2025-11-24 22:07:41 +0000 UTC" firstStartedPulling="2025-11-24 22:07:43.250355101 +0000 UTC m=+2881.566607274" lastFinishedPulling="2025-11-24 22:07:45.785156552 +0000 UTC m=+2884.101408725" observedRunningTime="2025-11-24 22:07:46.31458617 +0000 UTC m=+2884.630838353" watchObservedRunningTime="2025-11-24 22:07:46.318411063 +0000 UTC m=+2884.634663236" Nov 24 22:07:49 crc kubenswrapper[4915]: I1124 22:07:49.724506 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:49 crc kubenswrapper[4915]: I1124 22:07:49.725183 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:50 crc kubenswrapper[4915]: I1124 22:07:50.770022 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wx7zv" podUID="05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" containerName="registry-server" probeResult="failure" output=< Nov 24 22:07:50 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 22:07:50 crc kubenswrapper[4915]: > Nov 24 22:07:52 crc kubenswrapper[4915]: I1124 22:07:52.294546 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:52 crc kubenswrapper[4915]: I1124 22:07:52.294594 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:52 crc 
kubenswrapper[4915]: I1124 22:07:52.353658 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:52 crc kubenswrapper[4915]: I1124 22:07:52.406532 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:52 crc kubenswrapper[4915]: I1124 22:07:52.586734 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p9f6r"] Nov 24 22:07:54 crc kubenswrapper[4915]: I1124 22:07:54.327296 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:07:54 crc kubenswrapper[4915]: I1124 22:07:54.327603 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:07:54 crc kubenswrapper[4915]: I1124 22:07:54.379538 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p9f6r" podUID="0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" containerName="registry-server" containerID="cri-o://a20906b72e4bbb828fc748ec6d059673dfa875b265992ce74f789981654f1159" gracePeriod=2 Nov 24 22:07:54 crc kubenswrapper[4915]: I1124 22:07:54.878208 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.063220 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-utilities\") pod \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\" (UID: \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\") " Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.063372 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-catalog-content\") pod \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\" (UID: \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\") " Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.063573 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lfc2\" (UniqueName: \"kubernetes.io/projected/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-kube-api-access-4lfc2\") pod \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\" (UID: \"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5\") " Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.063899 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-utilities" (OuterVolumeSpecName: "utilities") pod "0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" (UID: "0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.064351 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.071714 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-kube-api-access-4lfc2" (OuterVolumeSpecName: "kube-api-access-4lfc2") pod "0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" (UID: "0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5"). InnerVolumeSpecName "kube-api-access-4lfc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.092753 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" (UID: "0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.166188 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.166229 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lfc2\" (UniqueName: \"kubernetes.io/projected/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5-kube-api-access-4lfc2\") on node \"crc\" DevicePath \"\"" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.394053 4915 generic.go:334] "Generic (PLEG): container finished" podID="0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" containerID="a20906b72e4bbb828fc748ec6d059673dfa875b265992ce74f789981654f1159" exitCode=0 Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.394101 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9f6r" event={"ID":"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5","Type":"ContainerDied","Data":"a20906b72e4bbb828fc748ec6d059673dfa875b265992ce74f789981654f1159"} Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.394133 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p9f6r" event={"ID":"0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5","Type":"ContainerDied","Data":"fa1b191ff0a477eb2390f6c7a057b2654e159cd9d6a710a56213e4acf50b4c85"} Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.394142 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p9f6r" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.394183 4915 scope.go:117] "RemoveContainer" containerID="a20906b72e4bbb828fc748ec6d059673dfa875b265992ce74f789981654f1159" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.424098 4915 scope.go:117] "RemoveContainer" containerID="9ac777a446573d08480681e4e084d458f2bd8b14e957559c94ec2079e3e2b635" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.438923 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p9f6r"] Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.448561 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p9f6r"] Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.464880 4915 scope.go:117] "RemoveContainer" containerID="dfb8656709a929bd0090b2179bcc7cce89609f6db576630f50ced3952dc5d9a9" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.550505 4915 scope.go:117] "RemoveContainer" containerID="a20906b72e4bbb828fc748ec6d059673dfa875b265992ce74f789981654f1159" Nov 24 22:07:55 crc kubenswrapper[4915]: E1124 22:07:55.552954 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a20906b72e4bbb828fc748ec6d059673dfa875b265992ce74f789981654f1159\": container with ID starting with a20906b72e4bbb828fc748ec6d059673dfa875b265992ce74f789981654f1159 not found: ID does not exist" containerID="a20906b72e4bbb828fc748ec6d059673dfa875b265992ce74f789981654f1159" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.552987 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20906b72e4bbb828fc748ec6d059673dfa875b265992ce74f789981654f1159"} err="failed to get container status \"a20906b72e4bbb828fc748ec6d059673dfa875b265992ce74f789981654f1159\": rpc error: code = NotFound desc = could not find container 
\"a20906b72e4bbb828fc748ec6d059673dfa875b265992ce74f789981654f1159\": container with ID starting with a20906b72e4bbb828fc748ec6d059673dfa875b265992ce74f789981654f1159 not found: ID does not exist" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.553010 4915 scope.go:117] "RemoveContainer" containerID="9ac777a446573d08480681e4e084d458f2bd8b14e957559c94ec2079e3e2b635" Nov 24 22:07:55 crc kubenswrapper[4915]: E1124 22:07:55.553495 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ac777a446573d08480681e4e084d458f2bd8b14e957559c94ec2079e3e2b635\": container with ID starting with 9ac777a446573d08480681e4e084d458f2bd8b14e957559c94ec2079e3e2b635 not found: ID does not exist" containerID="9ac777a446573d08480681e4e084d458f2bd8b14e957559c94ec2079e3e2b635" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.553528 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac777a446573d08480681e4e084d458f2bd8b14e957559c94ec2079e3e2b635"} err="failed to get container status \"9ac777a446573d08480681e4e084d458f2bd8b14e957559c94ec2079e3e2b635\": rpc error: code = NotFound desc = could not find container \"9ac777a446573d08480681e4e084d458f2bd8b14e957559c94ec2079e3e2b635\": container with ID starting with 9ac777a446573d08480681e4e084d458f2bd8b14e957559c94ec2079e3e2b635 not found: ID does not exist" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.553542 4915 scope.go:117] "RemoveContainer" containerID="dfb8656709a929bd0090b2179bcc7cce89609f6db576630f50ced3952dc5d9a9" Nov 24 22:07:55 crc kubenswrapper[4915]: E1124 22:07:55.555306 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfb8656709a929bd0090b2179bcc7cce89609f6db576630f50ced3952dc5d9a9\": container with ID starting with dfb8656709a929bd0090b2179bcc7cce89609f6db576630f50ced3952dc5d9a9 not found: ID does not exist" 
containerID="dfb8656709a929bd0090b2179bcc7cce89609f6db576630f50ced3952dc5d9a9" Nov 24 22:07:55 crc kubenswrapper[4915]: I1124 22:07:55.555335 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfb8656709a929bd0090b2179bcc7cce89609f6db576630f50ced3952dc5d9a9"} err="failed to get container status \"dfb8656709a929bd0090b2179bcc7cce89609f6db576630f50ced3952dc5d9a9\": rpc error: code = NotFound desc = could not find container \"dfb8656709a929bd0090b2179bcc7cce89609f6db576630f50ced3952dc5d9a9\": container with ID starting with dfb8656709a929bd0090b2179bcc7cce89609f6db576630f50ced3952dc5d9a9 not found: ID does not exist" Nov 24 22:07:56 crc kubenswrapper[4915]: I1124 22:07:56.441441 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" path="/var/lib/kubelet/pods/0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5/volumes" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.613443 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p6svj"] Nov 24 22:07:59 crc kubenswrapper[4915]: E1124 22:07:59.614380 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" containerName="registry-server" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.614393 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" containerName="registry-server" Nov 24 22:07:59 crc kubenswrapper[4915]: E1124 22:07:59.614420 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" containerName="extract-utilities" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.614427 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" containerName="extract-utilities" Nov 24 22:07:59 crc kubenswrapper[4915]: E1124 22:07:59.614438 4915 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" containerName="extract-content" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.614445 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" containerName="extract-content" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.614731 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a5a445e-e9a2-4ce9-a28c-c9679b0b46f5" containerName="registry-server" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.616621 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.626181 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p6svj"] Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.691038 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g29b\" (UniqueName: \"kubernetes.io/projected/946ff18c-f0b1-4f57-a802-fed199e5ecfb-kube-api-access-4g29b\") pod \"certified-operators-p6svj\" (UID: \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\") " pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.691080 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946ff18c-f0b1-4f57-a802-fed199e5ecfb-utilities\") pod \"certified-operators-p6svj\" (UID: \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\") " pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.691856 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946ff18c-f0b1-4f57-a802-fed199e5ecfb-catalog-content\") pod 
\"certified-operators-p6svj\" (UID: \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\") " pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.773069 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.793576 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946ff18c-f0b1-4f57-a802-fed199e5ecfb-catalog-content\") pod \"certified-operators-p6svj\" (UID: \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\") " pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.793801 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g29b\" (UniqueName: \"kubernetes.io/projected/946ff18c-f0b1-4f57-a802-fed199e5ecfb-kube-api-access-4g29b\") pod \"certified-operators-p6svj\" (UID: \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\") " pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.793836 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946ff18c-f0b1-4f57-a802-fed199e5ecfb-utilities\") pod \"certified-operators-p6svj\" (UID: \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\") " pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.794344 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946ff18c-f0b1-4f57-a802-fed199e5ecfb-catalog-content\") pod \"certified-operators-p6svj\" (UID: \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\") " pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.795416 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946ff18c-f0b1-4f57-a802-fed199e5ecfb-utilities\") pod \"certified-operators-p6svj\" (UID: \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\") " pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.832519 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g29b\" (UniqueName: \"kubernetes.io/projected/946ff18c-f0b1-4f57-a802-fed199e5ecfb-kube-api-access-4g29b\") pod \"certified-operators-p6svj\" (UID: \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\") " pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.848532 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:07:59 crc kubenswrapper[4915]: I1124 22:07:59.936451 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:08:00 crc kubenswrapper[4915]: I1124 22:08:00.473507 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p6svj"] Nov 24 22:08:01 crc kubenswrapper[4915]: I1124 22:08:01.463638 4915 generic.go:334] "Generic (PLEG): container finished" podID="946ff18c-f0b1-4f57-a802-fed199e5ecfb" containerID="4807fc4b76176bbfe76d9dd214e97736bc71a969a88fd4138ea7675d626cf202" exitCode=0 Nov 24 22:08:01 crc kubenswrapper[4915]: I1124 22:08:01.463922 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6svj" event={"ID":"946ff18c-f0b1-4f57-a802-fed199e5ecfb","Type":"ContainerDied","Data":"4807fc4b76176bbfe76d9dd214e97736bc71a969a88fd4138ea7675d626cf202"} Nov 24 22:08:01 crc kubenswrapper[4915]: I1124 22:08:01.464179 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6svj" event={"ID":"946ff18c-f0b1-4f57-a802-fed199e5ecfb","Type":"ContainerStarted","Data":"0d20f1ad9917614ec23fec061f8f77bdf1b3c232ed89d1ad32a3d4353a899d8c"} Nov 24 22:08:02 crc kubenswrapper[4915]: I1124 22:08:02.476690 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6svj" event={"ID":"946ff18c-f0b1-4f57-a802-fed199e5ecfb","Type":"ContainerStarted","Data":"de345864b3ec744817a7d035dc8fbeb3d616e7cb37eddbdf846e8e0e87ab81e8"} Nov 24 22:08:04 crc kubenswrapper[4915]: I1124 22:08:04.507408 4915 generic.go:334] "Generic (PLEG): container finished" podID="946ff18c-f0b1-4f57-a802-fed199e5ecfb" containerID="de345864b3ec744817a7d035dc8fbeb3d616e7cb37eddbdf846e8e0e87ab81e8" exitCode=0 Nov 24 22:08:04 crc kubenswrapper[4915]: I1124 22:08:04.507472 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6svj" 
event={"ID":"946ff18c-f0b1-4f57-a802-fed199e5ecfb","Type":"ContainerDied","Data":"de345864b3ec744817a7d035dc8fbeb3d616e7cb37eddbdf846e8e0e87ab81e8"} Nov 24 22:08:05 crc kubenswrapper[4915]: I1124 22:08:05.525438 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6svj" event={"ID":"946ff18c-f0b1-4f57-a802-fed199e5ecfb","Type":"ContainerStarted","Data":"0f59aef9c7225e6c377a551e94019103ce7d00528b545cf5cac7a8c20c927b66"} Nov 24 22:08:05 crc kubenswrapper[4915]: I1124 22:08:05.546378 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p6svj" podStartSLOduration=3.080104471 podStartE2EDuration="6.546357401s" podCreationTimestamp="2025-11-24 22:07:59 +0000 UTC" firstStartedPulling="2025-11-24 22:08:01.466076935 +0000 UTC m=+2899.782329118" lastFinishedPulling="2025-11-24 22:08:04.932329875 +0000 UTC m=+2903.248582048" observedRunningTime="2025-11-24 22:08:05.546287049 +0000 UTC m=+2903.862539252" watchObservedRunningTime="2025-11-24 22:08:05.546357401 +0000 UTC m=+2903.862609574" Nov 24 22:08:07 crc kubenswrapper[4915]: I1124 22:08:07.605338 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wx7zv"] Nov 24 22:08:07 crc kubenswrapper[4915]: I1124 22:08:07.606270 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wx7zv" podUID="05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" containerName="registry-server" containerID="cri-o://ed08f83661fa732287b8a28bafe64d06174248cb635e7a503e1df233427dba32" gracePeriod=2 Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.144565 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.339329 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-utilities\") pod \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\" (UID: \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\") " Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.339654 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-catalog-content\") pod \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\" (UID: \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\") " Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.339736 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dqh8\" (UniqueName: \"kubernetes.io/projected/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-kube-api-access-9dqh8\") pod \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\" (UID: \"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6\") " Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.341066 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-utilities" (OuterVolumeSpecName: "utilities") pod "05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" (UID: "05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.350709 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-kube-api-access-9dqh8" (OuterVolumeSpecName: "kube-api-access-9dqh8") pod "05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" (UID: "05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6"). InnerVolumeSpecName "kube-api-access-9dqh8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.412490 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" (UID: "05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.442887 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dqh8\" (UniqueName: \"kubernetes.io/projected/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-kube-api-access-9dqh8\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.442928 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.442947 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.579270 4915 generic.go:334] "Generic (PLEG): container finished" podID="05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" containerID="ed08f83661fa732287b8a28bafe64d06174248cb635e7a503e1df233427dba32" exitCode=0 Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.579332 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wx7zv" event={"ID":"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6","Type":"ContainerDied","Data":"ed08f83661fa732287b8a28bafe64d06174248cb635e7a503e1df233427dba32"} Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.579372 4915 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-wx7zv" event={"ID":"05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6","Type":"ContainerDied","Data":"84d86c508acc1d97635bbfba6ab34faf7f148f83bc375d7194a49bb2c404e9c2"} Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.579391 4915 scope.go:117] "RemoveContainer" containerID="ed08f83661fa732287b8a28bafe64d06174248cb635e7a503e1df233427dba32" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.579593 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wx7zv" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.614531 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wx7zv"] Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.626388 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wx7zv"] Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.628467 4915 scope.go:117] "RemoveContainer" containerID="6bb7382a414407fcc20a082178042f0050c46e9dee912199973662b3eee923cd" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.666825 4915 scope.go:117] "RemoveContainer" containerID="96c985f5be81b5ff15049f413d865fbad8287e1624361f9f014c879e3d5b4819" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.747983 4915 scope.go:117] "RemoveContainer" containerID="ed08f83661fa732287b8a28bafe64d06174248cb635e7a503e1df233427dba32" Nov 24 22:08:08 crc kubenswrapper[4915]: E1124 22:08:08.749266 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed08f83661fa732287b8a28bafe64d06174248cb635e7a503e1df233427dba32\": container with ID starting with ed08f83661fa732287b8a28bafe64d06174248cb635e7a503e1df233427dba32 not found: ID does not exist" containerID="ed08f83661fa732287b8a28bafe64d06174248cb635e7a503e1df233427dba32" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 
22:08:08.749299 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed08f83661fa732287b8a28bafe64d06174248cb635e7a503e1df233427dba32"} err="failed to get container status \"ed08f83661fa732287b8a28bafe64d06174248cb635e7a503e1df233427dba32\": rpc error: code = NotFound desc = could not find container \"ed08f83661fa732287b8a28bafe64d06174248cb635e7a503e1df233427dba32\": container with ID starting with ed08f83661fa732287b8a28bafe64d06174248cb635e7a503e1df233427dba32 not found: ID does not exist" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.749329 4915 scope.go:117] "RemoveContainer" containerID="6bb7382a414407fcc20a082178042f0050c46e9dee912199973662b3eee923cd" Nov 24 22:08:08 crc kubenswrapper[4915]: E1124 22:08:08.749688 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bb7382a414407fcc20a082178042f0050c46e9dee912199973662b3eee923cd\": container with ID starting with 6bb7382a414407fcc20a082178042f0050c46e9dee912199973662b3eee923cd not found: ID does not exist" containerID="6bb7382a414407fcc20a082178042f0050c46e9dee912199973662b3eee923cd" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.749723 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bb7382a414407fcc20a082178042f0050c46e9dee912199973662b3eee923cd"} err="failed to get container status \"6bb7382a414407fcc20a082178042f0050c46e9dee912199973662b3eee923cd\": rpc error: code = NotFound desc = could not find container \"6bb7382a414407fcc20a082178042f0050c46e9dee912199973662b3eee923cd\": container with ID starting with 6bb7382a414407fcc20a082178042f0050c46e9dee912199973662b3eee923cd not found: ID does not exist" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.749744 4915 scope.go:117] "RemoveContainer" containerID="96c985f5be81b5ff15049f413d865fbad8287e1624361f9f014c879e3d5b4819" Nov 24 22:08:08 crc 
kubenswrapper[4915]: E1124 22:08:08.750239 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c985f5be81b5ff15049f413d865fbad8287e1624361f9f014c879e3d5b4819\": container with ID starting with 96c985f5be81b5ff15049f413d865fbad8287e1624361f9f014c879e3d5b4819 not found: ID does not exist" containerID="96c985f5be81b5ff15049f413d865fbad8287e1624361f9f014c879e3d5b4819" Nov 24 22:08:08 crc kubenswrapper[4915]: I1124 22:08:08.750260 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c985f5be81b5ff15049f413d865fbad8287e1624361f9f014c879e3d5b4819"} err="failed to get container status \"96c985f5be81b5ff15049f413d865fbad8287e1624361f9f014c879e3d5b4819\": rpc error: code = NotFound desc = could not find container \"96c985f5be81b5ff15049f413d865fbad8287e1624361f9f014c879e3d5b4819\": container with ID starting with 96c985f5be81b5ff15049f413d865fbad8287e1624361f9f014c879e3d5b4819 not found: ID does not exist" Nov 24 22:08:09 crc kubenswrapper[4915]: I1124 22:08:09.937394 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:08:09 crc kubenswrapper[4915]: I1124 22:08:09.937693 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:08:10 crc kubenswrapper[4915]: I1124 22:08:10.444739 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" path="/var/lib/kubelet/pods/05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6/volumes" Nov 24 22:08:10 crc kubenswrapper[4915]: I1124 22:08:10.993919 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-p6svj" podUID="946ff18c-f0b1-4f57-a802-fed199e5ecfb" containerName="registry-server" probeResult="failure" output=< Nov 24 22:08:10 crc kubenswrapper[4915]: timeout: 
failed to connect service ":50051" within 1s Nov 24 22:08:10 crc kubenswrapper[4915]: > Nov 24 22:08:19 crc kubenswrapper[4915]: I1124 22:08:19.984593 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:08:20 crc kubenswrapper[4915]: I1124 22:08:20.059464 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:08:20 crc kubenswrapper[4915]: I1124 22:08:20.236694 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p6svj"] Nov 24 22:08:21 crc kubenswrapper[4915]: I1124 22:08:21.732393 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p6svj" podUID="946ff18c-f0b1-4f57-a802-fed199e5ecfb" containerName="registry-server" containerID="cri-o://0f59aef9c7225e6c377a551e94019103ce7d00528b545cf5cac7a8c20c927b66" gracePeriod=2 Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.259937 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.310595 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946ff18c-f0b1-4f57-a802-fed199e5ecfb-utilities\") pod \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\" (UID: \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\") " Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.310671 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946ff18c-f0b1-4f57-a802-fed199e5ecfb-catalog-content\") pod \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\" (UID: \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\") " Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.310712 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g29b\" (UniqueName: \"kubernetes.io/projected/946ff18c-f0b1-4f57-a802-fed199e5ecfb-kube-api-access-4g29b\") pod \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\" (UID: \"946ff18c-f0b1-4f57-a802-fed199e5ecfb\") " Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.311466 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/946ff18c-f0b1-4f57-a802-fed199e5ecfb-utilities" (OuterVolumeSpecName: "utilities") pod "946ff18c-f0b1-4f57-a802-fed199e5ecfb" (UID: "946ff18c-f0b1-4f57-a802-fed199e5ecfb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.311686 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/946ff18c-f0b1-4f57-a802-fed199e5ecfb-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.332208 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946ff18c-f0b1-4f57-a802-fed199e5ecfb-kube-api-access-4g29b" (OuterVolumeSpecName: "kube-api-access-4g29b") pod "946ff18c-f0b1-4f57-a802-fed199e5ecfb" (UID: "946ff18c-f0b1-4f57-a802-fed199e5ecfb"). InnerVolumeSpecName "kube-api-access-4g29b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.371186 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/946ff18c-f0b1-4f57-a802-fed199e5ecfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "946ff18c-f0b1-4f57-a802-fed199e5ecfb" (UID: "946ff18c-f0b1-4f57-a802-fed199e5ecfb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.414592 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/946ff18c-f0b1-4f57-a802-fed199e5ecfb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.414625 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g29b\" (UniqueName: \"kubernetes.io/projected/946ff18c-f0b1-4f57-a802-fed199e5ecfb-kube-api-access-4g29b\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.746875 4915 generic.go:334] "Generic (PLEG): container finished" podID="946ff18c-f0b1-4f57-a802-fed199e5ecfb" containerID="0f59aef9c7225e6c377a551e94019103ce7d00528b545cf5cac7a8c20c927b66" exitCode=0 Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.746920 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p6svj" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.746937 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6svj" event={"ID":"946ff18c-f0b1-4f57-a802-fed199e5ecfb","Type":"ContainerDied","Data":"0f59aef9c7225e6c377a551e94019103ce7d00528b545cf5cac7a8c20c927b66"} Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.747289 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p6svj" event={"ID":"946ff18c-f0b1-4f57-a802-fed199e5ecfb","Type":"ContainerDied","Data":"0d20f1ad9917614ec23fec061f8f77bdf1b3c232ed89d1ad32a3d4353a899d8c"} Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.747310 4915 scope.go:117] "RemoveContainer" containerID="0f59aef9c7225e6c377a551e94019103ce7d00528b545cf5cac7a8c20c927b66" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.773235 4915 scope.go:117] "RemoveContainer" 
containerID="de345864b3ec744817a7d035dc8fbeb3d616e7cb37eddbdf846e8e0e87ab81e8" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.776467 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p6svj"] Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.788194 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p6svj"] Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.797865 4915 scope.go:117] "RemoveContainer" containerID="4807fc4b76176bbfe76d9dd214e97736bc71a969a88fd4138ea7675d626cf202" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.857422 4915 scope.go:117] "RemoveContainer" containerID="0f59aef9c7225e6c377a551e94019103ce7d00528b545cf5cac7a8c20c927b66" Nov 24 22:08:22 crc kubenswrapper[4915]: E1124 22:08:22.857896 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f59aef9c7225e6c377a551e94019103ce7d00528b545cf5cac7a8c20c927b66\": container with ID starting with 0f59aef9c7225e6c377a551e94019103ce7d00528b545cf5cac7a8c20c927b66 not found: ID does not exist" containerID="0f59aef9c7225e6c377a551e94019103ce7d00528b545cf5cac7a8c20c927b66" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.857948 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f59aef9c7225e6c377a551e94019103ce7d00528b545cf5cac7a8c20c927b66"} err="failed to get container status \"0f59aef9c7225e6c377a551e94019103ce7d00528b545cf5cac7a8c20c927b66\": rpc error: code = NotFound desc = could not find container \"0f59aef9c7225e6c377a551e94019103ce7d00528b545cf5cac7a8c20c927b66\": container with ID starting with 0f59aef9c7225e6c377a551e94019103ce7d00528b545cf5cac7a8c20c927b66 not found: ID does not exist" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.857996 4915 scope.go:117] "RemoveContainer" 
containerID="de345864b3ec744817a7d035dc8fbeb3d616e7cb37eddbdf846e8e0e87ab81e8" Nov 24 22:08:22 crc kubenswrapper[4915]: E1124 22:08:22.858383 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de345864b3ec744817a7d035dc8fbeb3d616e7cb37eddbdf846e8e0e87ab81e8\": container with ID starting with de345864b3ec744817a7d035dc8fbeb3d616e7cb37eddbdf846e8e0e87ab81e8 not found: ID does not exist" containerID="de345864b3ec744817a7d035dc8fbeb3d616e7cb37eddbdf846e8e0e87ab81e8" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.858421 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de345864b3ec744817a7d035dc8fbeb3d616e7cb37eddbdf846e8e0e87ab81e8"} err="failed to get container status \"de345864b3ec744817a7d035dc8fbeb3d616e7cb37eddbdf846e8e0e87ab81e8\": rpc error: code = NotFound desc = could not find container \"de345864b3ec744817a7d035dc8fbeb3d616e7cb37eddbdf846e8e0e87ab81e8\": container with ID starting with de345864b3ec744817a7d035dc8fbeb3d616e7cb37eddbdf846e8e0e87ab81e8 not found: ID does not exist" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.858485 4915 scope.go:117] "RemoveContainer" containerID="4807fc4b76176bbfe76d9dd214e97736bc71a969a88fd4138ea7675d626cf202" Nov 24 22:08:22 crc kubenswrapper[4915]: E1124 22:08:22.858808 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4807fc4b76176bbfe76d9dd214e97736bc71a969a88fd4138ea7675d626cf202\": container with ID starting with 4807fc4b76176bbfe76d9dd214e97736bc71a969a88fd4138ea7675d626cf202 not found: ID does not exist" containerID="4807fc4b76176bbfe76d9dd214e97736bc71a969a88fd4138ea7675d626cf202" Nov 24 22:08:22 crc kubenswrapper[4915]: I1124 22:08:22.858833 4915 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4807fc4b76176bbfe76d9dd214e97736bc71a969a88fd4138ea7675d626cf202"} err="failed to get container status \"4807fc4b76176bbfe76d9dd214e97736bc71a969a88fd4138ea7675d626cf202\": rpc error: code = NotFound desc = could not find container \"4807fc4b76176bbfe76d9dd214e97736bc71a969a88fd4138ea7675d626cf202\": container with ID starting with 4807fc4b76176bbfe76d9dd214e97736bc71a969a88fd4138ea7675d626cf202 not found: ID does not exist" Nov 24 22:08:24 crc kubenswrapper[4915]: I1124 22:08:24.327194 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:08:24 crc kubenswrapper[4915]: I1124 22:08:24.327509 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:08:24 crc kubenswrapper[4915]: I1124 22:08:24.441009 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946ff18c-f0b1-4f57-a802-fed199e5ecfb" path="/var/lib/kubelet/pods/946ff18c-f0b1-4f57-a802-fed199e5ecfb/volumes" Nov 24 22:08:46 crc kubenswrapper[4915]: E1124 22:08:46.335642 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13961a83_9c81_49e1_b894_3651f2f91eb3.slice/crio-7433b4902ee95ecd11c6be9d8d549412b34c8c2304d87f2ff6d3315e54fb41b5.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13961a83_9c81_49e1_b894_3651f2f91eb3.slice/crio-conmon-7433b4902ee95ecd11c6be9d8d549412b34c8c2304d87f2ff6d3315e54fb41b5.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:08:47 crc kubenswrapper[4915]: I1124 22:08:47.041283 4915 generic.go:334] "Generic (PLEG): container finished" podID="13961a83-9c81-49e1-b894-3651f2f91eb3" containerID="7433b4902ee95ecd11c6be9d8d549412b34c8c2304d87f2ff6d3315e54fb41b5" exitCode=0 Nov 24 22:08:47 crc kubenswrapper[4915]: I1124 22:08:47.041512 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" event={"ID":"13961a83-9c81-49e1-b894-3651f2f91eb3","Type":"ContainerDied","Data":"7433b4902ee95ecd11c6be9d8d549412b34c8c2304d87f2ff6d3315e54fb41b5"} Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.470277 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.554669 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-1\") pod \"13961a83-9c81-49e1-b894-3651f2f91eb3\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.554756 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjxtx\" (UniqueName: \"kubernetes.io/projected/13961a83-9c81-49e1-b894-3651f2f91eb3-kube-api-access-qjxtx\") pod \"13961a83-9c81-49e1-b894-3651f2f91eb3\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.555028 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" 
(UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-0\") pod \"13961a83-9c81-49e1-b894-3651f2f91eb3\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.555069 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-2\") pod \"13961a83-9c81-49e1-b894-3651f2f91eb3\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.555113 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-telemetry-combined-ca-bundle\") pod \"13961a83-9c81-49e1-b894-3651f2f91eb3\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.555153 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ssh-key\") pod \"13961a83-9c81-49e1-b894-3651f2f91eb3\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.555195 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-inventory\") pod \"13961a83-9c81-49e1-b894-3651f2f91eb3\" (UID: \"13961a83-9c81-49e1-b894-3651f2f91eb3\") " Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.577175 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "13961a83-9c81-49e1-b894-3651f2f91eb3" (UID: 
"13961a83-9c81-49e1-b894-3651f2f91eb3"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.580853 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13961a83-9c81-49e1-b894-3651f2f91eb3-kube-api-access-qjxtx" (OuterVolumeSpecName: "kube-api-access-qjxtx") pod "13961a83-9c81-49e1-b894-3651f2f91eb3" (UID: "13961a83-9c81-49e1-b894-3651f2f91eb3"). InnerVolumeSpecName "kube-api-access-qjxtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.586416 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "13961a83-9c81-49e1-b894-3651f2f91eb3" (UID: "13961a83-9c81-49e1-b894-3651f2f91eb3"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.597692 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "13961a83-9c81-49e1-b894-3651f2f91eb3" (UID: "13961a83-9c81-49e1-b894-3651f2f91eb3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.601337 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "13961a83-9c81-49e1-b894-3651f2f91eb3" (UID: "13961a83-9c81-49e1-b894-3651f2f91eb3"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.612388 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "13961a83-9c81-49e1-b894-3651f2f91eb3" (UID: "13961a83-9c81-49e1-b894-3651f2f91eb3"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.612603 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-inventory" (OuterVolumeSpecName: "inventory") pod "13961a83-9c81-49e1-b894-3651f2f91eb3" (UID: "13961a83-9c81-49e1-b894-3651f2f91eb3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.658929 4915 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.658972 4915 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.658987 4915 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.659000 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.659013 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.659026 4915 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/13961a83-9c81-49e1-b894-3651f2f91eb3-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:48 crc kubenswrapper[4915]: I1124 22:08:48.659038 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjxtx\" (UniqueName: \"kubernetes.io/projected/13961a83-9c81-49e1-b894-3651f2f91eb3-kube-api-access-qjxtx\") on node \"crc\" DevicePath \"\"" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.065551 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" event={"ID":"13961a83-9c81-49e1-b894-3651f2f91eb3","Type":"ContainerDied","Data":"755dc12d75dbad8657e2694a15a14b61ef6e5722e7902f1b3ec4187aaaae2d4f"} Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.065871 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="755dc12d75dbad8657e2694a15a14b61ef6e5722e7902f1b3ec4187aaaae2d4f" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.065626 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.219382 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x"] Nov 24 22:08:49 crc kubenswrapper[4915]: E1124 22:08:49.220019 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" containerName="extract-utilities" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.220043 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" containerName="extract-utilities" Nov 24 22:08:49 crc kubenswrapper[4915]: E1124 22:08:49.220058 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946ff18c-f0b1-4f57-a802-fed199e5ecfb" containerName="registry-server" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.220068 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="946ff18c-f0b1-4f57-a802-fed199e5ecfb" containerName="registry-server" Nov 24 22:08:49 crc kubenswrapper[4915]: E1124 22:08:49.220092 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946ff18c-f0b1-4f57-a802-fed199e5ecfb" containerName="extract-utilities" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.220100 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="946ff18c-f0b1-4f57-a802-fed199e5ecfb" containerName="extract-utilities" Nov 24 22:08:49 crc kubenswrapper[4915]: E1124 22:08:49.220123 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" containerName="extract-content" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.220130 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" containerName="extract-content" Nov 24 22:08:49 crc kubenswrapper[4915]: E1124 22:08:49.220144 4915 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" containerName="registry-server" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.220153 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" containerName="registry-server" Nov 24 22:08:49 crc kubenswrapper[4915]: E1124 22:08:49.220170 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13961a83-9c81-49e1-b894-3651f2f91eb3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.220179 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="13961a83-9c81-49e1-b894-3651f2f91eb3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 22:08:49 crc kubenswrapper[4915]: E1124 22:08:49.220188 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946ff18c-f0b1-4f57-a802-fed199e5ecfb" containerName="extract-content" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.220195 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="946ff18c-f0b1-4f57-a802-fed199e5ecfb" containerName="extract-content" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.220495 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="13961a83-9c81-49e1-b894-3651f2f91eb3" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.220517 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="946ff18c-f0b1-4f57-a802-fed199e5ecfb" containerName="registry-server" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.220551 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="05d642b3-ffd3-49cc-ba0d-1c7f9113f2d6" containerName="registry-server" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.221593 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.227731 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.227978 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.228072 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.228927 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.242072 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x"] Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.279532 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.397268 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.397349 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-inventory\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.397409 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.397441 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.397528 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wff4\" (UniqueName: \"kubernetes.io/projected/7e90f618-8014-4582-a184-e00647111efc-kube-api-access-6wff4\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.397563 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-2\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.397580 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.499201 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.499485 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.499534 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-1\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.499561 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.499635 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wff4\" (UniqueName: \"kubernetes.io/projected/7e90f618-8014-4582-a184-e00647111efc-kube-api-access-6wff4\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.499670 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.499688 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: 
\"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.504765 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.505451 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.505717 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ssh-key\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.506535 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 
22:08:49.509409 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.514905 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.522257 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wff4\" (UniqueName: \"kubernetes.io/projected/7e90f618-8014-4582-a184-e00647111efc-kube-api-access-6wff4\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:49 crc kubenswrapper[4915]: I1124 22:08:49.547119 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:08:50 crc kubenswrapper[4915]: I1124 22:08:50.114169 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x"] Nov 24 22:08:50 crc kubenswrapper[4915]: I1124 22:08:50.121750 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:08:51 crc kubenswrapper[4915]: I1124 22:08:51.088737 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" event={"ID":"7e90f618-8014-4582-a184-e00647111efc","Type":"ContainerStarted","Data":"51f0125fd062bf03b157da3ea6e3d4bb94d2a7f2ee4553f6aae8780b67ed8a13"} Nov 24 22:08:52 crc kubenswrapper[4915]: I1124 22:08:52.104912 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" event={"ID":"7e90f618-8014-4582-a184-e00647111efc","Type":"ContainerStarted","Data":"8fcc505ea082c38686b24f0bd76d2edbc7fbabde343ac656989d63adf3610cfc"} Nov 24 22:08:52 crc kubenswrapper[4915]: I1124 22:08:52.130341 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" podStartSLOduration=2.473726544 podStartE2EDuration="3.130312211s" podCreationTimestamp="2025-11-24 22:08:49 +0000 UTC" firstStartedPulling="2025-11-24 22:08:50.121515785 +0000 UTC m=+2948.437767958" lastFinishedPulling="2025-11-24 22:08:50.778101452 +0000 UTC m=+2949.094353625" observedRunningTime="2025-11-24 22:08:52.125190082 +0000 UTC m=+2950.441442265" watchObservedRunningTime="2025-11-24 22:08:52.130312211 +0000 UTC m=+2950.446564404" Nov 24 22:08:54 crc kubenswrapper[4915]: I1124 22:08:54.326818 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:08:54 crc kubenswrapper[4915]: I1124 22:08:54.327112 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:08:54 crc kubenswrapper[4915]: I1124 22:08:54.327154 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 22:08:54 crc kubenswrapper[4915]: I1124 22:08:54.327789 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:08:54 crc kubenswrapper[4915]: I1124 22:08:54.327941 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" gracePeriod=600 Nov 24 22:08:54 crc kubenswrapper[4915]: E1124 22:08:54.449697 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:08:55 crc kubenswrapper[4915]: I1124 22:08:55.139944 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" exitCode=0 Nov 24 22:08:55 crc kubenswrapper[4915]: I1124 22:08:55.140009 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537"} Nov 24 22:08:55 crc kubenswrapper[4915]: I1124 22:08:55.140067 4915 scope.go:117] "RemoveContainer" containerID="a449c95daa9a45bf8c92ace7ad503db5e63a7f318ae10b5488d2b660b4c746c3" Nov 24 22:08:55 crc kubenswrapper[4915]: I1124 22:08:55.140701 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:08:55 crc kubenswrapper[4915]: E1124 22:08:55.141116 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:09:09 crc kubenswrapper[4915]: I1124 22:09:09.427305 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:09:09 crc kubenswrapper[4915]: E1124 22:09:09.428132 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:09:22 crc kubenswrapper[4915]: I1124 22:09:22.435698 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:09:22 crc kubenswrapper[4915]: E1124 22:09:22.436628 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:09:37 crc kubenswrapper[4915]: I1124 22:09:37.427584 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:09:37 crc kubenswrapper[4915]: E1124 22:09:37.428435 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:09:48 crc kubenswrapper[4915]: I1124 22:09:48.427500 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:09:48 crc kubenswrapper[4915]: E1124 22:09:48.428483 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s
restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:10:00 crc kubenswrapper[4915]: I1124 22:10:00.428050 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:10:00 crc kubenswrapper[4915]: E1124 22:10:00.428816 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:10:14 crc kubenswrapper[4915]: I1124 22:10:14.426826 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:10:14 crc kubenswrapper[4915]: E1124 22:10:14.427574 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:10:25 crc kubenswrapper[4915]: I1124 22:10:25.428575 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:10:25 crc kubenswrapper[4915]: E1124 22:10:25.430415 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:10:40 crc kubenswrapper[4915]: I1124 22:10:40.426914 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:10:40 crc kubenswrapper[4915]: E1124 22:10:40.427660 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:10:55 crc kubenswrapper[4915]: I1124 22:10:55.429265 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:10:55 crc kubenswrapper[4915]: E1124 22:10:55.430290 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:11:04 crc kubenswrapper[4915]: I1124 22:11:04.838626 4915 generic.go:334] "Generic (PLEG): container finished" podID="7e90f618-8014-4582-a184-e00647111efc" containerID="8fcc505ea082c38686b24f0bd76d2edbc7fbabde343ac656989d63adf3610cfc" exitCode=0 Nov 24 22:11:04 crc kubenswrapper[4915]: I1124 22:11:04.838691 4915 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" event={"ID":"7e90f618-8014-4582-a184-e00647111efc","Type":"ContainerDied","Data":"8fcc505ea082c38686b24f0bd76d2edbc7fbabde343ac656989d63adf3610cfc"} Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.358451 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.526415 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ssh-key\") pod \"7e90f618-8014-4582-a184-e00647111efc\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.526830 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-2\") pod \"7e90f618-8014-4582-a184-e00647111efc\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.526942 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-0\") pod \"7e90f618-8014-4582-a184-e00647111efc\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.527018 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-inventory\") pod \"7e90f618-8014-4582-a184-e00647111efc\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.527121 4915 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6wff4\" (UniqueName: \"kubernetes.io/projected/7e90f618-8014-4582-a184-e00647111efc-kube-api-access-6wff4\") pod \"7e90f618-8014-4582-a184-e00647111efc\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.527225 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-1\") pod \"7e90f618-8014-4582-a184-e00647111efc\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.527297 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-telemetry-power-monitoring-combined-ca-bundle\") pod \"7e90f618-8014-4582-a184-e00647111efc\" (UID: \"7e90f618-8014-4582-a184-e00647111efc\") " Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.541469 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e90f618-8014-4582-a184-e00647111efc-kube-api-access-6wff4" (OuterVolumeSpecName: "kube-api-access-6wff4") pod "7e90f618-8014-4582-a184-e00647111efc" (UID: "7e90f618-8014-4582-a184-e00647111efc"). InnerVolumeSpecName "kube-api-access-6wff4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.543113 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "7e90f618-8014-4582-a184-e00647111efc" (UID: "7e90f618-8014-4582-a184-e00647111efc"). 
InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.558623 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "7e90f618-8014-4582-a184-e00647111efc" (UID: "7e90f618-8014-4582-a184-e00647111efc"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.563540 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "7e90f618-8014-4582-a184-e00647111efc" (UID: "7e90f618-8014-4582-a184-e00647111efc"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.565131 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-inventory" (OuterVolumeSpecName: "inventory") pod "7e90f618-8014-4582-a184-e00647111efc" (UID: "7e90f618-8014-4582-a184-e00647111efc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.572347 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "7e90f618-8014-4582-a184-e00647111efc" (UID: "7e90f618-8014-4582-a184-e00647111efc"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.592754 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7e90f618-8014-4582-a184-e00647111efc" (UID: "7e90f618-8014-4582-a184-e00647111efc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.629979 4915 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.630012 4915 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.630023 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.630032 4915 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.630041 4915 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.630050 4915 
reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e90f618-8014-4582-a184-e00647111efc-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.630060 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wff4\" (UniqueName: \"kubernetes.io/projected/7e90f618-8014-4582-a184-e00647111efc-kube-api-access-6wff4\") on node \"crc\" DevicePath \"\"" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.868724 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" event={"ID":"7e90f618-8014-4582-a184-e00647111efc","Type":"ContainerDied","Data":"51f0125fd062bf03b157da3ea6e3d4bb94d2a7f2ee4553f6aae8780b67ed8a13"} Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.868800 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51f0125fd062bf03b157da3ea6e3d4bb94d2a7f2ee4553f6aae8780b67ed8a13" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.868831 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.977268 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf"] Nov 24 22:11:06 crc kubenswrapper[4915]: E1124 22:11:06.977987 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e90f618-8014-4582-a184-e00647111efc" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.978012 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e90f618-8014-4582-a184-e00647111efc" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.978401 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e90f618-8014-4582-a184-e00647111efc" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.979422 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.983315 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.984920 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.985090 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.985423 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.985711 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jkk6k" Nov 24 22:11:06 crc kubenswrapper[4915]: I1124 22:11:06.998920 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf"] Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.142369 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqbm4\" (UniqueName: \"kubernetes.io/projected/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-kube-api-access-gqbm4\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.142740 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: 
\"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.142938 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.143061 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.143167 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.245193 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.245322 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.245358 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.245391 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.245523 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqbm4\" (UniqueName: \"kubernetes.io/projected/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-kube-api-access-gqbm4\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.250546 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-logging-compute-config-data-0\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.250908 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.251106 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.251353 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.264374 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqbm4\" (UniqueName: \"kubernetes.io/projected/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-kube-api-access-gqbm4\") pod \"logging-edpm-deployment-openstack-edpm-ipam-jzmrf\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.310184 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:07 crc kubenswrapper[4915]: I1124 22:11:07.914734 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf"] Nov 24 22:11:08 crc kubenswrapper[4915]: I1124 22:11:08.427618 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:11:08 crc kubenswrapper[4915]: E1124 22:11:08.428337 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:11:08 crc kubenswrapper[4915]: I1124 22:11:08.895141 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" event={"ID":"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6","Type":"ContainerStarted","Data":"fceac05b9736c45382c9f2bd4b8afbb5008920f18a2055cc71d1449a686c7124"} Nov 24 22:11:08 crc kubenswrapper[4915]: I1124 22:11:08.895199 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" event={"ID":"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6","Type":"ContainerStarted","Data":"c88031fec8dad5390f1753a430c6d64d09b00f308b589efa1c58fe43d7d6b856"} Nov 24 22:11:08 crc kubenswrapper[4915]: I1124 22:11:08.913725 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" podStartSLOduration=2.420164969 podStartE2EDuration="2.913707967s" podCreationTimestamp="2025-11-24 22:11:06 +0000 UTC" firstStartedPulling="2025-11-24 22:11:07.925916053 +0000 
UTC m=+3086.242168226" lastFinishedPulling="2025-11-24 22:11:08.419459031 +0000 UTC m=+3086.735711224" observedRunningTime="2025-11-24 22:11:08.910641494 +0000 UTC m=+3087.226893677" watchObservedRunningTime="2025-11-24 22:11:08.913707967 +0000 UTC m=+3087.229960140" Nov 24 22:11:22 crc kubenswrapper[4915]: I1124 22:11:22.434887 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:11:22 crc kubenswrapper[4915]: E1124 22:11:22.435700 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:11:26 crc kubenswrapper[4915]: I1124 22:11:26.096553 4915 generic.go:334] "Generic (PLEG): container finished" podID="ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6" containerID="fceac05b9736c45382c9f2bd4b8afbb5008920f18a2055cc71d1449a686c7124" exitCode=0 Nov 24 22:11:26 crc kubenswrapper[4915]: I1124 22:11:26.096676 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" event={"ID":"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6","Type":"ContainerDied","Data":"fceac05b9736c45382c9f2bd4b8afbb5008920f18a2055cc71d1449a686c7124"} Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.727552 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.873912 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-inventory\") pod \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.874018 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqbm4\" (UniqueName: \"kubernetes.io/projected/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-kube-api-access-gqbm4\") pod \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.874189 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-logging-compute-config-data-0\") pod \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.874218 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-ssh-key\") pod \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.874266 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-logging-compute-config-data-1\") pod \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\" (UID: \"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6\") " Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.880946 4915 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-kube-api-access-gqbm4" (OuterVolumeSpecName: "kube-api-access-gqbm4") pod "ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6" (UID: "ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6"). InnerVolumeSpecName "kube-api-access-gqbm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.911375 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6" (UID: "ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.911564 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6" (UID: "ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.917991 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-inventory" (OuterVolumeSpecName: "inventory") pod "ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6" (UID: "ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.930144 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6" (UID: "ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.982435 4915 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.982463 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.982473 4915 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.982482 4915 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-inventory\") on node \"crc\" DevicePath \"\"" Nov 24 22:11:27 crc kubenswrapper[4915]: I1124 22:11:27.982492 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqbm4\" (UniqueName: \"kubernetes.io/projected/ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6-kube-api-access-gqbm4\") on node \"crc\" DevicePath \"\"" Nov 24 22:11:28 crc kubenswrapper[4915]: I1124 22:11:28.127154 4915 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" event={"ID":"ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6","Type":"ContainerDied","Data":"c88031fec8dad5390f1753a430c6d64d09b00f308b589efa1c58fe43d7d6b856"} Nov 24 22:11:28 crc kubenswrapper[4915]: I1124 22:11:28.127462 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c88031fec8dad5390f1753a430c6d64d09b00f308b589efa1c58fe43d7d6b856" Nov 24 22:11:28 crc kubenswrapper[4915]: I1124 22:11:28.127515 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-jzmrf" Nov 24 22:11:35 crc kubenswrapper[4915]: I1124 22:11:35.427880 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:11:35 crc kubenswrapper[4915]: E1124 22:11:35.428950 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:11:47 crc kubenswrapper[4915]: I1124 22:11:47.427124 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:11:47 crc kubenswrapper[4915]: E1124 22:11:47.428034 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" 
podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:11:58 crc kubenswrapper[4915]: I1124 22:11:58.426721 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:11:58 crc kubenswrapper[4915]: E1124 22:11:58.427639 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:12:12 crc kubenswrapper[4915]: I1124 22:12:12.437288 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:12:12 crc kubenswrapper[4915]: E1124 22:12:12.438216 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:12:26 crc kubenswrapper[4915]: I1124 22:12:26.426996 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:12:26 crc kubenswrapper[4915]: E1124 22:12:26.427767 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:12:40 crc kubenswrapper[4915]: I1124 22:12:40.430447 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:12:40 crc kubenswrapper[4915]: E1124 22:12:40.431760 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:12:52 crc kubenswrapper[4915]: I1124 22:12:52.448375 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:12:52 crc kubenswrapper[4915]: E1124 22:12:52.449612 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:13:04 crc kubenswrapper[4915]: I1124 22:13:04.427433 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:13:04 crc kubenswrapper[4915]: E1124 22:13:04.428262 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:13:16 crc kubenswrapper[4915]: I1124 22:13:16.427499 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:13:16 crc kubenswrapper[4915]: E1124 22:13:16.429704 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:13:28 crc kubenswrapper[4915]: I1124 22:13:28.428267 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:13:28 crc kubenswrapper[4915]: E1124 22:13:28.429549 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:13:42 crc kubenswrapper[4915]: I1124 22:13:42.437313 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:13:42 crc kubenswrapper[4915]: E1124 22:13:42.439145 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:13:53 crc kubenswrapper[4915]: I1124 22:13:53.428933 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:13:53 crc kubenswrapper[4915]: E1124 22:13:53.430582 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:14:08 crc kubenswrapper[4915]: I1124 22:14:08.427147 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:14:09 crc kubenswrapper[4915]: I1124 22:14:09.457529 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"9cbd3f603d5d3382fd217f881dc17d503d8cc26c34ddaecaaae4eb7dd1d26609"} Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.182589 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4"] Nov 24 22:15:00 crc kubenswrapper[4915]: E1124 22:15:00.184731 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.184846 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.185153 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.186057 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.195264 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.195484 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.197086 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4"] Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.284693 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2twz9\" (UniqueName: \"kubernetes.io/projected/2289122f-8531-401d-8af4-df2ee60099c0-kube-api-access-2twz9\") pod \"collect-profiles-29400375-c2dz4\" (UID: \"2289122f-8531-401d-8af4-df2ee60099c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.285260 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2289122f-8531-401d-8af4-df2ee60099c0-config-volume\") pod \"collect-profiles-29400375-c2dz4\" (UID: \"2289122f-8531-401d-8af4-df2ee60099c0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.285445 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2289122f-8531-401d-8af4-df2ee60099c0-secret-volume\") pod \"collect-profiles-29400375-c2dz4\" (UID: \"2289122f-8531-401d-8af4-df2ee60099c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.388528 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2twz9\" (UniqueName: \"kubernetes.io/projected/2289122f-8531-401d-8af4-df2ee60099c0-kube-api-access-2twz9\") pod \"collect-profiles-29400375-c2dz4\" (UID: \"2289122f-8531-401d-8af4-df2ee60099c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.388578 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2289122f-8531-401d-8af4-df2ee60099c0-config-volume\") pod \"collect-profiles-29400375-c2dz4\" (UID: \"2289122f-8531-401d-8af4-df2ee60099c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.388685 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2289122f-8531-401d-8af4-df2ee60099c0-secret-volume\") pod \"collect-profiles-29400375-c2dz4\" (UID: \"2289122f-8531-401d-8af4-df2ee60099c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.391027 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2289122f-8531-401d-8af4-df2ee60099c0-config-volume\") pod \"collect-profiles-29400375-c2dz4\" (UID: \"2289122f-8531-401d-8af4-df2ee60099c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.396430 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2289122f-8531-401d-8af4-df2ee60099c0-secret-volume\") pod \"collect-profiles-29400375-c2dz4\" (UID: \"2289122f-8531-401d-8af4-df2ee60099c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.406972 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2twz9\" (UniqueName: \"kubernetes.io/projected/2289122f-8531-401d-8af4-df2ee60099c0-kube-api-access-2twz9\") pod \"collect-profiles-29400375-c2dz4\" (UID: \"2289122f-8531-401d-8af4-df2ee60099c0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" Nov 24 22:15:00 crc kubenswrapper[4915]: I1124 22:15:00.520584 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" Nov 24 22:15:01 crc kubenswrapper[4915]: I1124 22:15:01.000116 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4"] Nov 24 22:15:01 crc kubenswrapper[4915]: I1124 22:15:01.143265 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" event={"ID":"2289122f-8531-401d-8af4-df2ee60099c0","Type":"ContainerStarted","Data":"3d678025185f2a895735b6f511db916cb107cb7f1bd3c82a653cd798995d56ea"} Nov 24 22:15:02 crc kubenswrapper[4915]: I1124 22:15:02.154736 4915 generic.go:334] "Generic (PLEG): container finished" podID="2289122f-8531-401d-8af4-df2ee60099c0" containerID="3f525ece29c3053b46ab2e937a7047c2cbcc13771402e73a276697815219cdec" exitCode=0 Nov 24 22:15:02 crc kubenswrapper[4915]: I1124 22:15:02.154814 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" event={"ID":"2289122f-8531-401d-8af4-df2ee60099c0","Type":"ContainerDied","Data":"3f525ece29c3053b46ab2e937a7047c2cbcc13771402e73a276697815219cdec"} Nov 24 22:15:03 crc kubenswrapper[4915]: I1124 22:15:03.644040 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" Nov 24 22:15:03 crc kubenswrapper[4915]: I1124 22:15:03.662563 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2289122f-8531-401d-8af4-df2ee60099c0-config-volume\") pod \"2289122f-8531-401d-8af4-df2ee60099c0\" (UID: \"2289122f-8531-401d-8af4-df2ee60099c0\") " Nov 24 22:15:03 crc kubenswrapper[4915]: I1124 22:15:03.662837 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2twz9\" (UniqueName: \"kubernetes.io/projected/2289122f-8531-401d-8af4-df2ee60099c0-kube-api-access-2twz9\") pod \"2289122f-8531-401d-8af4-df2ee60099c0\" (UID: \"2289122f-8531-401d-8af4-df2ee60099c0\") " Nov 24 22:15:03 crc kubenswrapper[4915]: I1124 22:15:03.663015 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2289122f-8531-401d-8af4-df2ee60099c0-secret-volume\") pod \"2289122f-8531-401d-8af4-df2ee60099c0\" (UID: \"2289122f-8531-401d-8af4-df2ee60099c0\") " Nov 24 22:15:03 crc kubenswrapper[4915]: I1124 22:15:03.667569 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2289122f-8531-401d-8af4-df2ee60099c0-config-volume" (OuterVolumeSpecName: "config-volume") pod "2289122f-8531-401d-8af4-df2ee60099c0" (UID: "2289122f-8531-401d-8af4-df2ee60099c0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:15:03 crc kubenswrapper[4915]: I1124 22:15:03.671732 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2289122f-8531-401d-8af4-df2ee60099c0-kube-api-access-2twz9" (OuterVolumeSpecName: "kube-api-access-2twz9") pod "2289122f-8531-401d-8af4-df2ee60099c0" (UID: "2289122f-8531-401d-8af4-df2ee60099c0"). 
InnerVolumeSpecName "kube-api-access-2twz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:15:03 crc kubenswrapper[4915]: I1124 22:15:03.677024 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2289122f-8531-401d-8af4-df2ee60099c0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2289122f-8531-401d-8af4-df2ee60099c0" (UID: "2289122f-8531-401d-8af4-df2ee60099c0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:15:03 crc kubenswrapper[4915]: I1124 22:15:03.767301 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2289122f-8531-401d-8af4-df2ee60099c0-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:03 crc kubenswrapper[4915]: I1124 22:15:03.767334 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2twz9\" (UniqueName: \"kubernetes.io/projected/2289122f-8531-401d-8af4-df2ee60099c0-kube-api-access-2twz9\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:03 crc kubenswrapper[4915]: I1124 22:15:03.767346 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2289122f-8531-401d-8af4-df2ee60099c0-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:04 crc kubenswrapper[4915]: I1124 22:15:04.177967 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" event={"ID":"2289122f-8531-401d-8af4-df2ee60099c0","Type":"ContainerDied","Data":"3d678025185f2a895735b6f511db916cb107cb7f1bd3c82a653cd798995d56ea"} Nov 24 22:15:04 crc kubenswrapper[4915]: I1124 22:15:04.178014 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d678025185f2a895735b6f511db916cb107cb7f1bd3c82a653cd798995d56ea" Nov 24 22:15:04 crc kubenswrapper[4915]: I1124 22:15:04.178100 4915 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4" Nov 24 22:15:04 crc kubenswrapper[4915]: I1124 22:15:04.725660 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg"] Nov 24 22:15:04 crc kubenswrapper[4915]: I1124 22:15:04.739126 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400330-m77rg"] Nov 24 22:15:06 crc kubenswrapper[4915]: I1124 22:15:06.452506 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc" path="/var/lib/kubelet/pods/e4a6f164-d664-4b92-a0c0-8ffdbbeb07dc/volumes" Nov 24 22:15:17 crc kubenswrapper[4915]: I1124 22:15:17.937823 4915 scope.go:117] "RemoveContainer" containerID="c62f234ecec394e652ad6e26029697d408d5aa75b9e1029dc4acdd7a8230cf89" Nov 24 22:15:24 crc kubenswrapper[4915]: I1124 22:15:24.585672 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hb7dj"] Nov 24 22:15:24 crc kubenswrapper[4915]: E1124 22:15:24.587106 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2289122f-8531-401d-8af4-df2ee60099c0" containerName="collect-profiles" Nov 24 22:15:24 crc kubenswrapper[4915]: I1124 22:15:24.587122 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2289122f-8531-401d-8af4-df2ee60099c0" containerName="collect-profiles" Nov 24 22:15:24 crc kubenswrapper[4915]: I1124 22:15:24.587403 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2289122f-8531-401d-8af4-df2ee60099c0" containerName="collect-profiles" Nov 24 22:15:24 crc kubenswrapper[4915]: I1124 22:15:24.589271 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:24 crc kubenswrapper[4915]: I1124 22:15:24.599725 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hb7dj"] Nov 24 22:15:25 crc kubenswrapper[4915]: I1124 22:15:25.072966 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92dh2\" (UniqueName: \"kubernetes.io/projected/c25b6faa-4feb-4ab5-a33f-ccb169318aff-kube-api-access-92dh2\") pod \"redhat-operators-hb7dj\" (UID: \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\") " pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:25 crc kubenswrapper[4915]: I1124 22:15:25.073370 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25b6faa-4feb-4ab5-a33f-ccb169318aff-catalog-content\") pod \"redhat-operators-hb7dj\" (UID: \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\") " pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:25 crc kubenswrapper[4915]: I1124 22:15:25.073771 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25b6faa-4feb-4ab5-a33f-ccb169318aff-utilities\") pod \"redhat-operators-hb7dj\" (UID: \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\") " pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:25 crc kubenswrapper[4915]: I1124 22:15:25.176741 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92dh2\" (UniqueName: \"kubernetes.io/projected/c25b6faa-4feb-4ab5-a33f-ccb169318aff-kube-api-access-92dh2\") pod \"redhat-operators-hb7dj\" (UID: \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\") " pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:25 crc kubenswrapper[4915]: I1124 22:15:25.176810 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25b6faa-4feb-4ab5-a33f-ccb169318aff-catalog-content\") pod \"redhat-operators-hb7dj\" (UID: \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\") " pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:25 crc kubenswrapper[4915]: I1124 22:15:25.176909 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25b6faa-4feb-4ab5-a33f-ccb169318aff-utilities\") pod \"redhat-operators-hb7dj\" (UID: \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\") " pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:25 crc kubenswrapper[4915]: I1124 22:15:25.177460 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25b6faa-4feb-4ab5-a33f-ccb169318aff-utilities\") pod \"redhat-operators-hb7dj\" (UID: \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\") " pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:25 crc kubenswrapper[4915]: I1124 22:15:25.195569 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25b6faa-4feb-4ab5-a33f-ccb169318aff-catalog-content\") pod \"redhat-operators-hb7dj\" (UID: \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\") " pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:25 crc kubenswrapper[4915]: I1124 22:15:25.216490 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92dh2\" (UniqueName: \"kubernetes.io/projected/c25b6faa-4feb-4ab5-a33f-ccb169318aff-kube-api-access-92dh2\") pod \"redhat-operators-hb7dj\" (UID: \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\") " pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:25 crc kubenswrapper[4915]: I1124 22:15:25.217533 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:25 crc kubenswrapper[4915]: I1124 22:15:25.853957 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hb7dj"] Nov 24 22:15:26 crc kubenswrapper[4915]: I1124 22:15:26.546673 4915 generic.go:334] "Generic (PLEG): container finished" podID="c25b6faa-4feb-4ab5-a33f-ccb169318aff" containerID="379afeda7fd5fe7331a9299f6c06779d0ab8df8084cc5f077467584efb409e6c" exitCode=0 Nov 24 22:15:26 crc kubenswrapper[4915]: I1124 22:15:26.546756 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hb7dj" event={"ID":"c25b6faa-4feb-4ab5-a33f-ccb169318aff","Type":"ContainerDied","Data":"379afeda7fd5fe7331a9299f6c06779d0ab8df8084cc5f077467584efb409e6c"} Nov 24 22:15:26 crc kubenswrapper[4915]: I1124 22:15:26.546988 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hb7dj" event={"ID":"c25b6faa-4feb-4ab5-a33f-ccb169318aff","Type":"ContainerStarted","Data":"945826d7a33ba370a909723417dc102ca044e30a94b1ccfd5216a024d1737f48"} Nov 24 22:15:26 crc kubenswrapper[4915]: I1124 22:15:26.549205 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:15:28 crc kubenswrapper[4915]: I1124 22:15:28.571321 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hb7dj" event={"ID":"c25b6faa-4feb-4ab5-a33f-ccb169318aff","Type":"ContainerStarted","Data":"4a9a7a22722b8c10177f3876b0ba7ae06c98bb57deed2930a2b80f5f12a97281"} Nov 24 22:15:32 crc kubenswrapper[4915]: I1124 22:15:32.622084 4915 generic.go:334] "Generic (PLEG): container finished" podID="c25b6faa-4feb-4ab5-a33f-ccb169318aff" containerID="4a9a7a22722b8c10177f3876b0ba7ae06c98bb57deed2930a2b80f5f12a97281" exitCode=0 Nov 24 22:15:32 crc kubenswrapper[4915]: I1124 22:15:32.622175 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-hb7dj" event={"ID":"c25b6faa-4feb-4ab5-a33f-ccb169318aff","Type":"ContainerDied","Data":"4a9a7a22722b8c10177f3876b0ba7ae06c98bb57deed2930a2b80f5f12a97281"} Nov 24 22:15:33 crc kubenswrapper[4915]: I1124 22:15:33.634395 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hb7dj" event={"ID":"c25b6faa-4feb-4ab5-a33f-ccb169318aff","Type":"ContainerStarted","Data":"465d19251e42c3ca8a4e2f6fe3cc3b54025ff3e6860757518aeea9d3f556131d"} Nov 24 22:15:33 crc kubenswrapper[4915]: I1124 22:15:33.665342 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hb7dj" podStartSLOduration=3.210797219 podStartE2EDuration="9.665317914s" podCreationTimestamp="2025-11-24 22:15:24 +0000 UTC" firstStartedPulling="2025-11-24 22:15:26.549001132 +0000 UTC m=+3344.865253295" lastFinishedPulling="2025-11-24 22:15:33.003521817 +0000 UTC m=+3351.319773990" observedRunningTime="2025-11-24 22:15:33.654485922 +0000 UTC m=+3351.970738125" watchObservedRunningTime="2025-11-24 22:15:33.665317914 +0000 UTC m=+3351.981570087" Nov 24 22:15:35 crc kubenswrapper[4915]: I1124 22:15:35.217966 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:35 crc kubenswrapper[4915]: I1124 22:15:35.218293 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:36 crc kubenswrapper[4915]: I1124 22:15:36.273717 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hb7dj" podUID="c25b6faa-4feb-4ab5-a33f-ccb169318aff" containerName="registry-server" probeResult="failure" output=< Nov 24 22:15:36 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 22:15:36 crc kubenswrapper[4915]: > Nov 24 22:15:45 crc kubenswrapper[4915]: I1124 
22:15:45.280874 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:45 crc kubenswrapper[4915]: I1124 22:15:45.340992 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:45 crc kubenswrapper[4915]: I1124 22:15:45.522491 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hb7dj"] Nov 24 22:15:46 crc kubenswrapper[4915]: I1124 22:15:46.827159 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hb7dj" podUID="c25b6faa-4feb-4ab5-a33f-ccb169318aff" containerName="registry-server" containerID="cri-o://465d19251e42c3ca8a4e2f6fe3cc3b54025ff3e6860757518aeea9d3f556131d" gracePeriod=2 Nov 24 22:15:47 crc kubenswrapper[4915]: I1124 22:15:47.857071 4915 generic.go:334] "Generic (PLEG): container finished" podID="c25b6faa-4feb-4ab5-a33f-ccb169318aff" containerID="465d19251e42c3ca8a4e2f6fe3cc3b54025ff3e6860757518aeea9d3f556131d" exitCode=0 Nov 24 22:15:47 crc kubenswrapper[4915]: I1124 22:15:47.857172 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hb7dj" event={"ID":"c25b6faa-4feb-4ab5-a33f-ccb169318aff","Type":"ContainerDied","Data":"465d19251e42c3ca8a4e2f6fe3cc3b54025ff3e6860757518aeea9d3f556131d"} Nov 24 22:15:47 crc kubenswrapper[4915]: I1124 22:15:47.960823 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.082626 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25b6faa-4feb-4ab5-a33f-ccb169318aff-utilities\") pod \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\" (UID: \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\") " Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.082891 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25b6faa-4feb-4ab5-a33f-ccb169318aff-catalog-content\") pod \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\" (UID: \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\") " Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.083136 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92dh2\" (UniqueName: \"kubernetes.io/projected/c25b6faa-4feb-4ab5-a33f-ccb169318aff-kube-api-access-92dh2\") pod \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\" (UID: \"c25b6faa-4feb-4ab5-a33f-ccb169318aff\") " Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.083567 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c25b6faa-4feb-4ab5-a33f-ccb169318aff-utilities" (OuterVolumeSpecName: "utilities") pod "c25b6faa-4feb-4ab5-a33f-ccb169318aff" (UID: "c25b6faa-4feb-4ab5-a33f-ccb169318aff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.083840 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25b6faa-4feb-4ab5-a33f-ccb169318aff-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.092383 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c25b6faa-4feb-4ab5-a33f-ccb169318aff-kube-api-access-92dh2" (OuterVolumeSpecName: "kube-api-access-92dh2") pod "c25b6faa-4feb-4ab5-a33f-ccb169318aff" (UID: "c25b6faa-4feb-4ab5-a33f-ccb169318aff"). InnerVolumeSpecName "kube-api-access-92dh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.185620 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c25b6faa-4feb-4ab5-a33f-ccb169318aff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c25b6faa-4feb-4ab5-a33f-ccb169318aff" (UID: "c25b6faa-4feb-4ab5-a33f-ccb169318aff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.186693 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25b6faa-4feb-4ab5-a33f-ccb169318aff-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.186714 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92dh2\" (UniqueName: \"kubernetes.io/projected/c25b6faa-4feb-4ab5-a33f-ccb169318aff-kube-api-access-92dh2\") on node \"crc\" DevicePath \"\"" Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.869494 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hb7dj" event={"ID":"c25b6faa-4feb-4ab5-a33f-ccb169318aff","Type":"ContainerDied","Data":"945826d7a33ba370a909723417dc102ca044e30a94b1ccfd5216a024d1737f48"} Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.869554 4915 scope.go:117] "RemoveContainer" containerID="465d19251e42c3ca8a4e2f6fe3cc3b54025ff3e6860757518aeea9d3f556131d" Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.869587 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hb7dj" Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.897214 4915 scope.go:117] "RemoveContainer" containerID="4a9a7a22722b8c10177f3876b0ba7ae06c98bb57deed2930a2b80f5f12a97281" Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.904323 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hb7dj"] Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.915483 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hb7dj"] Nov 24 22:15:48 crc kubenswrapper[4915]: I1124 22:15:48.922829 4915 scope.go:117] "RemoveContainer" containerID="379afeda7fd5fe7331a9299f6c06779d0ab8df8084cc5f077467584efb409e6c" Nov 24 22:15:50 crc kubenswrapper[4915]: I1124 22:15:50.437439 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c25b6faa-4feb-4ab5-a33f-ccb169318aff" path="/var/lib/kubelet/pods/c25b6faa-4feb-4ab5-a33f-ccb169318aff/volumes" Nov 24 22:16:24 crc kubenswrapper[4915]: I1124 22:16:24.327049 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:16:24 crc kubenswrapper[4915]: I1124 22:16:24.327676 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:16:54 crc kubenswrapper[4915]: I1124 22:16:54.327523 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:16:54 crc kubenswrapper[4915]: I1124 22:16:54.328030 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:17:24 crc kubenswrapper[4915]: I1124 22:17:24.327464 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:17:24 crc kubenswrapper[4915]: I1124 22:17:24.328400 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:17:24 crc kubenswrapper[4915]: I1124 22:17:24.328483 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 22:17:24 crc kubenswrapper[4915]: I1124 22:17:24.329941 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9cbd3f603d5d3382fd217f881dc17d503d8cc26c34ddaecaaae4eb7dd1d26609"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:17:24 crc kubenswrapper[4915]: I1124 22:17:24.330010 4915 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://9cbd3f603d5d3382fd217f881dc17d503d8cc26c34ddaecaaae4eb7dd1d26609" gracePeriod=600 Nov 24 22:17:25 crc kubenswrapper[4915]: I1124 22:17:25.044193 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="9cbd3f603d5d3382fd217f881dc17d503d8cc26c34ddaecaaae4eb7dd1d26609" exitCode=0 Nov 24 22:17:25 crc kubenswrapper[4915]: I1124 22:17:25.044367 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"9cbd3f603d5d3382fd217f881dc17d503d8cc26c34ddaecaaae4eb7dd1d26609"} Nov 24 22:17:25 crc kubenswrapper[4915]: I1124 22:17:25.045077 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f"} Nov 24 22:17:25 crc kubenswrapper[4915]: I1124 22:17:25.045112 4915 scope.go:117] "RemoveContainer" containerID="054fc2b938644352adf8db396f3f42317897486de434dc1476206a9f95949537" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.171861 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-94ztk"] Nov 24 22:17:48 crc kubenswrapper[4915]: E1124 22:17:48.174115 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25b6faa-4feb-4ab5-a33f-ccb169318aff" containerName="registry-server" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.174136 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25b6faa-4feb-4ab5-a33f-ccb169318aff" containerName="registry-server" 
Nov 24 22:17:48 crc kubenswrapper[4915]: E1124 22:17:48.174154 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25b6faa-4feb-4ab5-a33f-ccb169318aff" containerName="extract-content" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.174162 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25b6faa-4feb-4ab5-a33f-ccb169318aff" containerName="extract-content" Nov 24 22:17:48 crc kubenswrapper[4915]: E1124 22:17:48.174217 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25b6faa-4feb-4ab5-a33f-ccb169318aff" containerName="extract-utilities" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.174226 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25b6faa-4feb-4ab5-a33f-ccb169318aff" containerName="extract-utilities" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.174495 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c25b6faa-4feb-4ab5-a33f-ccb169318aff" containerName="registry-server" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.177070 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.184826 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94ztk"] Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.301358 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ced570-eb84-49e4-b193-453b60cca7c3-catalog-content\") pod \"community-operators-94ztk\" (UID: \"d5ced570-eb84-49e4-b193-453b60cca7c3\") " pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.301470 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7pwr\" (UniqueName: \"kubernetes.io/projected/d5ced570-eb84-49e4-b193-453b60cca7c3-kube-api-access-b7pwr\") pod \"community-operators-94ztk\" (UID: \"d5ced570-eb84-49e4-b193-453b60cca7c3\") " pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.301520 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ced570-eb84-49e4-b193-453b60cca7c3-utilities\") pod \"community-operators-94ztk\" (UID: \"d5ced570-eb84-49e4-b193-453b60cca7c3\") " pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.403487 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ced570-eb84-49e4-b193-453b60cca7c3-catalog-content\") pod \"community-operators-94ztk\" (UID: \"d5ced570-eb84-49e4-b193-453b60cca7c3\") " pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.403643 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-b7pwr\" (UniqueName: \"kubernetes.io/projected/d5ced570-eb84-49e4-b193-453b60cca7c3-kube-api-access-b7pwr\") pod \"community-operators-94ztk\" (UID: \"d5ced570-eb84-49e4-b193-453b60cca7c3\") " pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.403709 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ced570-eb84-49e4-b193-453b60cca7c3-utilities\") pod \"community-operators-94ztk\" (UID: \"d5ced570-eb84-49e4-b193-453b60cca7c3\") " pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.404043 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ced570-eb84-49e4-b193-453b60cca7c3-catalog-content\") pod \"community-operators-94ztk\" (UID: \"d5ced570-eb84-49e4-b193-453b60cca7c3\") " pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.404333 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ced570-eb84-49e4-b193-453b60cca7c3-utilities\") pod \"community-operators-94ztk\" (UID: \"d5ced570-eb84-49e4-b193-453b60cca7c3\") " pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.471721 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7pwr\" (UniqueName: \"kubernetes.io/projected/d5ced570-eb84-49e4-b193-453b60cca7c3-kube-api-access-b7pwr\") pod \"community-operators-94ztk\" (UID: \"d5ced570-eb84-49e4-b193-453b60cca7c3\") " pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:48 crc kubenswrapper[4915]: I1124 22:17:48.513000 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:49 crc kubenswrapper[4915]: I1124 22:17:49.026386 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-94ztk"] Nov 24 22:17:49 crc kubenswrapper[4915]: I1124 22:17:49.374647 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94ztk" event={"ID":"d5ced570-eb84-49e4-b193-453b60cca7c3","Type":"ContainerStarted","Data":"87b84498aa60d30935f8deba9677510aa3e1ba455b3c492ef688eb76a881d95b"} Nov 24 22:17:50 crc kubenswrapper[4915]: I1124 22:17:50.387461 4915 generic.go:334] "Generic (PLEG): container finished" podID="d5ced570-eb84-49e4-b193-453b60cca7c3" containerID="41ccf65d9468b78c320ed5b75191610f640f81abfd2adc563904c2a13ed0a048" exitCode=0 Nov 24 22:17:50 crc kubenswrapper[4915]: I1124 22:17:50.387623 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94ztk" event={"ID":"d5ced570-eb84-49e4-b193-453b60cca7c3","Type":"ContainerDied","Data":"41ccf65d9468b78c320ed5b75191610f640f81abfd2adc563904c2a13ed0a048"} Nov 24 22:17:51 crc kubenswrapper[4915]: I1124 22:17:51.411108 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94ztk" event={"ID":"d5ced570-eb84-49e4-b193-453b60cca7c3","Type":"ContainerStarted","Data":"c7e3776c10c17f85c198d2d1743495676dfdf4a9377c0d97dd39604b87c3a77a"} Nov 24 22:17:53 crc kubenswrapper[4915]: I1124 22:17:53.434170 4915 generic.go:334] "Generic (PLEG): container finished" podID="d5ced570-eb84-49e4-b193-453b60cca7c3" containerID="c7e3776c10c17f85c198d2d1743495676dfdf4a9377c0d97dd39604b87c3a77a" exitCode=0 Nov 24 22:17:53 crc kubenswrapper[4915]: I1124 22:17:53.434304 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94ztk" 
event={"ID":"d5ced570-eb84-49e4-b193-453b60cca7c3","Type":"ContainerDied","Data":"c7e3776c10c17f85c198d2d1743495676dfdf4a9377c0d97dd39604b87c3a77a"} Nov 24 22:17:54 crc kubenswrapper[4915]: I1124 22:17:54.451124 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94ztk" event={"ID":"d5ced570-eb84-49e4-b193-453b60cca7c3","Type":"ContainerStarted","Data":"53236361d3370c143e7e1a9f231c4a28e7f6a21ffda0e7a2b037e1cc30481741"} Nov 24 22:17:54 crc kubenswrapper[4915]: I1124 22:17:54.471344 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-94ztk" podStartSLOduration=2.925995044 podStartE2EDuration="6.47132559s" podCreationTimestamp="2025-11-24 22:17:48 +0000 UTC" firstStartedPulling="2025-11-24 22:17:50.392020173 +0000 UTC m=+3488.708272346" lastFinishedPulling="2025-11-24 22:17:53.937350689 +0000 UTC m=+3492.253602892" observedRunningTime="2025-11-24 22:17:54.468894276 +0000 UTC m=+3492.785146459" watchObservedRunningTime="2025-11-24 22:17:54.47132559 +0000 UTC m=+3492.787577763" Nov 24 22:17:58 crc kubenswrapper[4915]: I1124 22:17:58.513704 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:58 crc kubenswrapper[4915]: I1124 22:17:58.514332 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:58 crc kubenswrapper[4915]: I1124 22:17:58.578903 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:59 crc kubenswrapper[4915]: I1124 22:17:59.593370 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:17:59 crc kubenswrapper[4915]: I1124 22:17:59.658824 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-94ztk"] Nov 24 22:18:01 crc kubenswrapper[4915]: I1124 22:18:01.550139 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-94ztk" podUID="d5ced570-eb84-49e4-b193-453b60cca7c3" containerName="registry-server" containerID="cri-o://53236361d3370c143e7e1a9f231c4a28e7f6a21ffda0e7a2b037e1cc30481741" gracePeriod=2 Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.086885 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.179557 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ced570-eb84-49e4-b193-453b60cca7c3-utilities\") pod \"d5ced570-eb84-49e4-b193-453b60cca7c3\" (UID: \"d5ced570-eb84-49e4-b193-453b60cca7c3\") " Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.179714 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ced570-eb84-49e4-b193-453b60cca7c3-catalog-content\") pod \"d5ced570-eb84-49e4-b193-453b60cca7c3\" (UID: \"d5ced570-eb84-49e4-b193-453b60cca7c3\") " Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.179780 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7pwr\" (UniqueName: \"kubernetes.io/projected/d5ced570-eb84-49e4-b193-453b60cca7c3-kube-api-access-b7pwr\") pod \"d5ced570-eb84-49e4-b193-453b60cca7c3\" (UID: \"d5ced570-eb84-49e4-b193-453b60cca7c3\") " Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.180360 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ced570-eb84-49e4-b193-453b60cca7c3-utilities" (OuterVolumeSpecName: "utilities") pod "d5ced570-eb84-49e4-b193-453b60cca7c3" (UID: 
"d5ced570-eb84-49e4-b193-453b60cca7c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.180464 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ced570-eb84-49e4-b193-453b60cca7c3-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.185284 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ced570-eb84-49e4-b193-453b60cca7c3-kube-api-access-b7pwr" (OuterVolumeSpecName: "kube-api-access-b7pwr") pod "d5ced570-eb84-49e4-b193-453b60cca7c3" (UID: "d5ced570-eb84-49e4-b193-453b60cca7c3"). InnerVolumeSpecName "kube-api-access-b7pwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.232078 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ced570-eb84-49e4-b193-453b60cca7c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5ced570-eb84-49e4-b193-453b60cca7c3" (UID: "d5ced570-eb84-49e4-b193-453b60cca7c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.282984 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7pwr\" (UniqueName: \"kubernetes.io/projected/d5ced570-eb84-49e4-b193-453b60cca7c3-kube-api-access-b7pwr\") on node \"crc\" DevicePath \"\"" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.283324 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ced570-eb84-49e4-b193-453b60cca7c3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.569179 4915 generic.go:334] "Generic (PLEG): container finished" podID="d5ced570-eb84-49e4-b193-453b60cca7c3" containerID="53236361d3370c143e7e1a9f231c4a28e7f6a21ffda0e7a2b037e1cc30481741" exitCode=0 Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.569248 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94ztk" event={"ID":"d5ced570-eb84-49e4-b193-453b60cca7c3","Type":"ContainerDied","Data":"53236361d3370c143e7e1a9f231c4a28e7f6a21ffda0e7a2b037e1cc30481741"} Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.569292 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-94ztk" event={"ID":"d5ced570-eb84-49e4-b193-453b60cca7c3","Type":"ContainerDied","Data":"87b84498aa60d30935f8deba9677510aa3e1ba455b3c492ef688eb76a881d95b"} Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.569316 4915 scope.go:117] "RemoveContainer" containerID="53236361d3370c143e7e1a9f231c4a28e7f6a21ffda0e7a2b037e1cc30481741" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.569326 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-94ztk" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.609599 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-94ztk"] Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.625649 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-94ztk"] Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.642470 4915 scope.go:117] "RemoveContainer" containerID="c7e3776c10c17f85c198d2d1743495676dfdf4a9377c0d97dd39604b87c3a77a" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.689598 4915 scope.go:117] "RemoveContainer" containerID="41ccf65d9468b78c320ed5b75191610f640f81abfd2adc563904c2a13ed0a048" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.726484 4915 scope.go:117] "RemoveContainer" containerID="53236361d3370c143e7e1a9f231c4a28e7f6a21ffda0e7a2b037e1cc30481741" Nov 24 22:18:02 crc kubenswrapper[4915]: E1124 22:18:02.726958 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53236361d3370c143e7e1a9f231c4a28e7f6a21ffda0e7a2b037e1cc30481741\": container with ID starting with 53236361d3370c143e7e1a9f231c4a28e7f6a21ffda0e7a2b037e1cc30481741 not found: ID does not exist" containerID="53236361d3370c143e7e1a9f231c4a28e7f6a21ffda0e7a2b037e1cc30481741" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.727014 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53236361d3370c143e7e1a9f231c4a28e7f6a21ffda0e7a2b037e1cc30481741"} err="failed to get container status \"53236361d3370c143e7e1a9f231c4a28e7f6a21ffda0e7a2b037e1cc30481741\": rpc error: code = NotFound desc = could not find container \"53236361d3370c143e7e1a9f231c4a28e7f6a21ffda0e7a2b037e1cc30481741\": container with ID starting with 53236361d3370c143e7e1a9f231c4a28e7f6a21ffda0e7a2b037e1cc30481741 not 
found: ID does not exist" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.727070 4915 scope.go:117] "RemoveContainer" containerID="c7e3776c10c17f85c198d2d1743495676dfdf4a9377c0d97dd39604b87c3a77a" Nov 24 22:18:02 crc kubenswrapper[4915]: E1124 22:18:02.727411 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7e3776c10c17f85c198d2d1743495676dfdf4a9377c0d97dd39604b87c3a77a\": container with ID starting with c7e3776c10c17f85c198d2d1743495676dfdf4a9377c0d97dd39604b87c3a77a not found: ID does not exist" containerID="c7e3776c10c17f85c198d2d1743495676dfdf4a9377c0d97dd39604b87c3a77a" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.727451 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7e3776c10c17f85c198d2d1743495676dfdf4a9377c0d97dd39604b87c3a77a"} err="failed to get container status \"c7e3776c10c17f85c198d2d1743495676dfdf4a9377c0d97dd39604b87c3a77a\": rpc error: code = NotFound desc = could not find container \"c7e3776c10c17f85c198d2d1743495676dfdf4a9377c0d97dd39604b87c3a77a\": container with ID starting with c7e3776c10c17f85c198d2d1743495676dfdf4a9377c0d97dd39604b87c3a77a not found: ID does not exist" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.727470 4915 scope.go:117] "RemoveContainer" containerID="41ccf65d9468b78c320ed5b75191610f640f81abfd2adc563904c2a13ed0a048" Nov 24 22:18:02 crc kubenswrapper[4915]: E1124 22:18:02.727991 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41ccf65d9468b78c320ed5b75191610f640f81abfd2adc563904c2a13ed0a048\": container with ID starting with 41ccf65d9468b78c320ed5b75191610f640f81abfd2adc563904c2a13ed0a048 not found: ID does not exist" containerID="41ccf65d9468b78c320ed5b75191610f640f81abfd2adc563904c2a13ed0a048" Nov 24 22:18:02 crc kubenswrapper[4915]: I1124 22:18:02.728024 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41ccf65d9468b78c320ed5b75191610f640f81abfd2adc563904c2a13ed0a048"} err="failed to get container status \"41ccf65d9468b78c320ed5b75191610f640f81abfd2adc563904c2a13ed0a048\": rpc error: code = NotFound desc = could not find container \"41ccf65d9468b78c320ed5b75191610f640f81abfd2adc563904c2a13ed0a048\": container with ID starting with 41ccf65d9468b78c320ed5b75191610f640f81abfd2adc563904c2a13ed0a048 not found: ID does not exist" Nov 24 22:18:04 crc kubenswrapper[4915]: I1124 22:18:04.447593 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ced570-eb84-49e4-b193-453b60cca7c3" path="/var/lib/kubelet/pods/d5ced570-eb84-49e4-b193-453b60cca7c3/volumes" Nov 24 22:18:38 crc kubenswrapper[4915]: I1124 22:18:38.933256 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x92v8"] Nov 24 22:18:38 crc kubenswrapper[4915]: E1124 22:18:38.934404 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ced570-eb84-49e4-b193-453b60cca7c3" containerName="registry-server" Nov 24 22:18:38 crc kubenswrapper[4915]: I1124 22:18:38.934420 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ced570-eb84-49e4-b193-453b60cca7c3" containerName="registry-server" Nov 24 22:18:38 crc kubenswrapper[4915]: E1124 22:18:38.934455 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ced570-eb84-49e4-b193-453b60cca7c3" containerName="extract-content" Nov 24 22:18:38 crc kubenswrapper[4915]: I1124 22:18:38.934464 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ced570-eb84-49e4-b193-453b60cca7c3" containerName="extract-content" Nov 24 22:18:38 crc kubenswrapper[4915]: E1124 22:18:38.934488 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ced570-eb84-49e4-b193-453b60cca7c3" containerName="extract-utilities" Nov 24 22:18:38 crc kubenswrapper[4915]: I1124 
22:18:38.934499 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ced570-eb84-49e4-b193-453b60cca7c3" containerName="extract-utilities" Nov 24 22:18:38 crc kubenswrapper[4915]: I1124 22:18:38.934786 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ced570-eb84-49e4-b193-453b60cca7c3" containerName="registry-server" Nov 24 22:18:38 crc kubenswrapper[4915]: I1124 22:18:38.937117 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:38 crc kubenswrapper[4915]: I1124 22:18:38.946493 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x92v8"] Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.007425 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5p2b\" (UniqueName: \"kubernetes.io/projected/25064c1b-8a7b-4a53-8f1e-093d0563002c-kube-api-access-t5p2b\") pod \"redhat-marketplace-x92v8\" (UID: \"25064c1b-8a7b-4a53-8f1e-093d0563002c\") " pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.007464 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25064c1b-8a7b-4a53-8f1e-093d0563002c-catalog-content\") pod \"redhat-marketplace-x92v8\" (UID: \"25064c1b-8a7b-4a53-8f1e-093d0563002c\") " pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.007530 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25064c1b-8a7b-4a53-8f1e-093d0563002c-utilities\") pod \"redhat-marketplace-x92v8\" (UID: \"25064c1b-8a7b-4a53-8f1e-093d0563002c\") " pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:39 crc kubenswrapper[4915]: 
I1124 22:18:39.103493 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4rfrb"] Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.106852 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.111481 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5p2b\" (UniqueName: \"kubernetes.io/projected/25064c1b-8a7b-4a53-8f1e-093d0563002c-kube-api-access-t5p2b\") pod \"redhat-marketplace-x92v8\" (UID: \"25064c1b-8a7b-4a53-8f1e-093d0563002c\") " pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.125218 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25064c1b-8a7b-4a53-8f1e-093d0563002c-catalog-content\") pod \"redhat-marketplace-x92v8\" (UID: \"25064c1b-8a7b-4a53-8f1e-093d0563002c\") " pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.125613 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25064c1b-8a7b-4a53-8f1e-093d0563002c-utilities\") pod \"redhat-marketplace-x92v8\" (UID: \"25064c1b-8a7b-4a53-8f1e-093d0563002c\") " pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.126520 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25064c1b-8a7b-4a53-8f1e-093d0563002c-utilities\") pod \"redhat-marketplace-x92v8\" (UID: \"25064c1b-8a7b-4a53-8f1e-093d0563002c\") " pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.126883 4915 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25064c1b-8a7b-4a53-8f1e-093d0563002c-catalog-content\") pod \"redhat-marketplace-x92v8\" (UID: \"25064c1b-8a7b-4a53-8f1e-093d0563002c\") " pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.142290 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4rfrb"] Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.173036 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5p2b\" (UniqueName: \"kubernetes.io/projected/25064c1b-8a7b-4a53-8f1e-093d0563002c-kube-api-access-t5p2b\") pod \"redhat-marketplace-x92v8\" (UID: \"25064c1b-8a7b-4a53-8f1e-093d0563002c\") " pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.228146 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwpqc\" (UniqueName: \"kubernetes.io/projected/66c014aa-fadf-4141-aeee-9b76ab57bb85-kube-api-access-pwpqc\") pod \"certified-operators-4rfrb\" (UID: \"66c014aa-fadf-4141-aeee-9b76ab57bb85\") " pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.228272 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66c014aa-fadf-4141-aeee-9b76ab57bb85-utilities\") pod \"certified-operators-4rfrb\" (UID: \"66c014aa-fadf-4141-aeee-9b76ab57bb85\") " pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.228377 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66c014aa-fadf-4141-aeee-9b76ab57bb85-catalog-content\") pod \"certified-operators-4rfrb\" (UID: 
\"66c014aa-fadf-4141-aeee-9b76ab57bb85\") " pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.294321 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.330959 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66c014aa-fadf-4141-aeee-9b76ab57bb85-utilities\") pod \"certified-operators-4rfrb\" (UID: \"66c014aa-fadf-4141-aeee-9b76ab57bb85\") " pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.331306 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66c014aa-fadf-4141-aeee-9b76ab57bb85-catalog-content\") pod \"certified-operators-4rfrb\" (UID: \"66c014aa-fadf-4141-aeee-9b76ab57bb85\") " pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.331444 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwpqc\" (UniqueName: \"kubernetes.io/projected/66c014aa-fadf-4141-aeee-9b76ab57bb85-kube-api-access-pwpqc\") pod \"certified-operators-4rfrb\" (UID: \"66c014aa-fadf-4141-aeee-9b76ab57bb85\") " pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.332538 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66c014aa-fadf-4141-aeee-9b76ab57bb85-utilities\") pod \"certified-operators-4rfrb\" (UID: \"66c014aa-fadf-4141-aeee-9b76ab57bb85\") " pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.332879 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66c014aa-fadf-4141-aeee-9b76ab57bb85-catalog-content\") pod \"certified-operators-4rfrb\" (UID: \"66c014aa-fadf-4141-aeee-9b76ab57bb85\") " pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.349721 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwpqc\" (UniqueName: \"kubernetes.io/projected/66c014aa-fadf-4141-aeee-9b76ab57bb85-kube-api-access-pwpqc\") pod \"certified-operators-4rfrb\" (UID: \"66c014aa-fadf-4141-aeee-9b76ab57bb85\") " pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.452349 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:39 crc kubenswrapper[4915]: I1124 22:18:39.842321 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x92v8"] Nov 24 22:18:40 crc kubenswrapper[4915]: W1124 22:18:39.999748 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66c014aa_fadf_4141_aeee_9b76ab57bb85.slice/crio-0969c37dceebd1fe56d1cfd26b22c4e5db624822c07be0d253583411a9912fd2 WatchSource:0}: Error finding container 0969c37dceebd1fe56d1cfd26b22c4e5db624822c07be0d253583411a9912fd2: Status 404 returned error can't find the container with id 0969c37dceebd1fe56d1cfd26b22c4e5db624822c07be0d253583411a9912fd2 Nov 24 22:18:40 crc kubenswrapper[4915]: I1124 22:18:39.999903 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4rfrb"] Nov 24 22:18:40 crc kubenswrapper[4915]: I1124 22:18:40.055440 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rfrb" 
event={"ID":"66c014aa-fadf-4141-aeee-9b76ab57bb85","Type":"ContainerStarted","Data":"0969c37dceebd1fe56d1cfd26b22c4e5db624822c07be0d253583411a9912fd2"} Nov 24 22:18:40 crc kubenswrapper[4915]: I1124 22:18:40.057490 4915 generic.go:334] "Generic (PLEG): container finished" podID="25064c1b-8a7b-4a53-8f1e-093d0563002c" containerID="f23ed7334d13811641954dd51b3ecb1b5c7a7055d0027df4662a684b0d120cd6" exitCode=0 Nov 24 22:18:40 crc kubenswrapper[4915]: I1124 22:18:40.057554 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x92v8" event={"ID":"25064c1b-8a7b-4a53-8f1e-093d0563002c","Type":"ContainerDied","Data":"f23ed7334d13811641954dd51b3ecb1b5c7a7055d0027df4662a684b0d120cd6"} Nov 24 22:18:40 crc kubenswrapper[4915]: I1124 22:18:40.057588 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x92v8" event={"ID":"25064c1b-8a7b-4a53-8f1e-093d0563002c","Type":"ContainerStarted","Data":"a0114c42c1351f0fe75969d84b03d1e99648a652a275933b71139ba0f61d9af4"} Nov 24 22:18:41 crc kubenswrapper[4915]: I1124 22:18:41.075002 4915 generic.go:334] "Generic (PLEG): container finished" podID="66c014aa-fadf-4141-aeee-9b76ab57bb85" containerID="af302e9f9c33bdf092005c7a5b3379f7c43d72095cee2ffb9bc529b11c4dcaae" exitCode=0 Nov 24 22:18:41 crc kubenswrapper[4915]: I1124 22:18:41.075070 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rfrb" event={"ID":"66c014aa-fadf-4141-aeee-9b76ab57bb85","Type":"ContainerDied","Data":"af302e9f9c33bdf092005c7a5b3379f7c43d72095cee2ffb9bc529b11c4dcaae"} Nov 24 22:18:42 crc kubenswrapper[4915]: I1124 22:18:42.087314 4915 generic.go:334] "Generic (PLEG): container finished" podID="25064c1b-8a7b-4a53-8f1e-093d0563002c" containerID="22c68ec9867c63432fe5c4cb9f0c9840a6ed94c39734515281037c9ce2dd863f" exitCode=0 Nov 24 22:18:42 crc kubenswrapper[4915]: I1124 22:18:42.087570 4915 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-x92v8" event={"ID":"25064c1b-8a7b-4a53-8f1e-093d0563002c","Type":"ContainerDied","Data":"22c68ec9867c63432fe5c4cb9f0c9840a6ed94c39734515281037c9ce2dd863f"} Nov 24 22:18:42 crc kubenswrapper[4915]: I1124 22:18:42.093698 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rfrb" event={"ID":"66c014aa-fadf-4141-aeee-9b76ab57bb85","Type":"ContainerStarted","Data":"509afaad53120dd7289f1738dfde6116dd70406e162efa88d4625a0c2cfb1a79"} Nov 24 22:18:44 crc kubenswrapper[4915]: I1124 22:18:44.120388 4915 generic.go:334] "Generic (PLEG): container finished" podID="66c014aa-fadf-4141-aeee-9b76ab57bb85" containerID="509afaad53120dd7289f1738dfde6116dd70406e162efa88d4625a0c2cfb1a79" exitCode=0 Nov 24 22:18:44 crc kubenswrapper[4915]: I1124 22:18:44.120473 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rfrb" event={"ID":"66c014aa-fadf-4141-aeee-9b76ab57bb85","Type":"ContainerDied","Data":"509afaad53120dd7289f1738dfde6116dd70406e162efa88d4625a0c2cfb1a79"} Nov 24 22:18:44 crc kubenswrapper[4915]: I1124 22:18:44.123900 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x92v8" event={"ID":"25064c1b-8a7b-4a53-8f1e-093d0563002c","Type":"ContainerStarted","Data":"d2770329e38fa6ea811daf854fe9c05dadc3057b13e80568ac9da3a28e4e618e"} Nov 24 22:18:44 crc kubenswrapper[4915]: I1124 22:18:44.172928 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x92v8" podStartSLOduration=3.241270933 podStartE2EDuration="6.172910107s" podCreationTimestamp="2025-11-24 22:18:38 +0000 UTC" firstStartedPulling="2025-11-24 22:18:40.059246294 +0000 UTC m=+3538.375498467" lastFinishedPulling="2025-11-24 22:18:42.990885458 +0000 UTC m=+3541.307137641" observedRunningTime="2025-11-24 22:18:44.171835809 +0000 UTC m=+3542.488088002" 
watchObservedRunningTime="2025-11-24 22:18:44.172910107 +0000 UTC m=+3542.489162280" Nov 24 22:18:45 crc kubenswrapper[4915]: I1124 22:18:45.142821 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rfrb" event={"ID":"66c014aa-fadf-4141-aeee-9b76ab57bb85","Type":"ContainerStarted","Data":"f6d1871ed57176ff1a50d0f38ce85ab771678dd11b6f6cd505c69e0978e84d4c"} Nov 24 22:18:45 crc kubenswrapper[4915]: I1124 22:18:45.184931 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4rfrb" podStartSLOduration=2.711393545 podStartE2EDuration="6.184908954s" podCreationTimestamp="2025-11-24 22:18:39 +0000 UTC" firstStartedPulling="2025-11-24 22:18:41.078037773 +0000 UTC m=+3539.394289946" lastFinishedPulling="2025-11-24 22:18:44.551553142 +0000 UTC m=+3542.867805355" observedRunningTime="2025-11-24 22:18:45.170952838 +0000 UTC m=+3543.487205021" watchObservedRunningTime="2025-11-24 22:18:45.184908954 +0000 UTC m=+3543.501161117" Nov 24 22:18:49 crc kubenswrapper[4915]: I1124 22:18:49.295487 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:49 crc kubenswrapper[4915]: I1124 22:18:49.296086 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:49 crc kubenswrapper[4915]: I1124 22:18:49.350621 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:49 crc kubenswrapper[4915]: I1124 22:18:49.453420 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:49 crc kubenswrapper[4915]: I1124 22:18:49.453503 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 
22:18:49 crc kubenswrapper[4915]: I1124 22:18:49.515853 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:50 crc kubenswrapper[4915]: I1124 22:18:50.253367 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:50 crc kubenswrapper[4915]: I1124 22:18:50.253742 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:50 crc kubenswrapper[4915]: I1124 22:18:50.882593 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4rfrb"] Nov 24 22:18:52 crc kubenswrapper[4915]: I1124 22:18:52.215330 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4rfrb" podUID="66c014aa-fadf-4141-aeee-9b76ab57bb85" containerName="registry-server" containerID="cri-o://f6d1871ed57176ff1a50d0f38ce85ab771678dd11b6f6cd505c69e0978e84d4c" gracePeriod=2 Nov 24 22:18:52 crc kubenswrapper[4915]: I1124 22:18:52.691085 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x92v8"] Nov 24 22:18:52 crc kubenswrapper[4915]: I1124 22:18:52.691663 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x92v8" podUID="25064c1b-8a7b-4a53-8f1e-093d0563002c" containerName="registry-server" containerID="cri-o://d2770329e38fa6ea811daf854fe9c05dadc3057b13e80568ac9da3a28e4e618e" gracePeriod=2 Nov 24 22:18:52 crc kubenswrapper[4915]: I1124 22:18:52.937732 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.029192 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwpqc\" (UniqueName: \"kubernetes.io/projected/66c014aa-fadf-4141-aeee-9b76ab57bb85-kube-api-access-pwpqc\") pod \"66c014aa-fadf-4141-aeee-9b76ab57bb85\" (UID: \"66c014aa-fadf-4141-aeee-9b76ab57bb85\") " Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.029240 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66c014aa-fadf-4141-aeee-9b76ab57bb85-utilities\") pod \"66c014aa-fadf-4141-aeee-9b76ab57bb85\" (UID: \"66c014aa-fadf-4141-aeee-9b76ab57bb85\") " Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.029307 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66c014aa-fadf-4141-aeee-9b76ab57bb85-catalog-content\") pod \"66c014aa-fadf-4141-aeee-9b76ab57bb85\" (UID: \"66c014aa-fadf-4141-aeee-9b76ab57bb85\") " Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.030844 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66c014aa-fadf-4141-aeee-9b76ab57bb85-utilities" (OuterVolumeSpecName: "utilities") pod "66c014aa-fadf-4141-aeee-9b76ab57bb85" (UID: "66c014aa-fadf-4141-aeee-9b76ab57bb85"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.042252 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c014aa-fadf-4141-aeee-9b76ab57bb85-kube-api-access-pwpqc" (OuterVolumeSpecName: "kube-api-access-pwpqc") pod "66c014aa-fadf-4141-aeee-9b76ab57bb85" (UID: "66c014aa-fadf-4141-aeee-9b76ab57bb85"). InnerVolumeSpecName "kube-api-access-pwpqc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.087076 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66c014aa-fadf-4141-aeee-9b76ab57bb85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66c014aa-fadf-4141-aeee-9b76ab57bb85" (UID: "66c014aa-fadf-4141-aeee-9b76ab57bb85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.112085 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.135582 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66c014aa-fadf-4141-aeee-9b76ab57bb85-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.135629 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwpqc\" (UniqueName: \"kubernetes.io/projected/66c014aa-fadf-4141-aeee-9b76ab57bb85-kube-api-access-pwpqc\") on node \"crc\" DevicePath \"\"" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.135643 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66c014aa-fadf-4141-aeee-9b76ab57bb85-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.229163 4915 generic.go:334] "Generic (PLEG): container finished" podID="66c014aa-fadf-4141-aeee-9b76ab57bb85" containerID="f6d1871ed57176ff1a50d0f38ce85ab771678dd11b6f6cd505c69e0978e84d4c" exitCode=0 Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.229240 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4rfrb" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.229244 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rfrb" event={"ID":"66c014aa-fadf-4141-aeee-9b76ab57bb85","Type":"ContainerDied","Data":"f6d1871ed57176ff1a50d0f38ce85ab771678dd11b6f6cd505c69e0978e84d4c"} Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.229412 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rfrb" event={"ID":"66c014aa-fadf-4141-aeee-9b76ab57bb85","Type":"ContainerDied","Data":"0969c37dceebd1fe56d1cfd26b22c4e5db624822c07be0d253583411a9912fd2"} Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.229449 4915 scope.go:117] "RemoveContainer" containerID="f6d1871ed57176ff1a50d0f38ce85ab771678dd11b6f6cd505c69e0978e84d4c" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.232014 4915 generic.go:334] "Generic (PLEG): container finished" podID="25064c1b-8a7b-4a53-8f1e-093d0563002c" containerID="d2770329e38fa6ea811daf854fe9c05dadc3057b13e80568ac9da3a28e4e618e" exitCode=0 Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.232053 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x92v8" event={"ID":"25064c1b-8a7b-4a53-8f1e-093d0563002c","Type":"ContainerDied","Data":"d2770329e38fa6ea811daf854fe9c05dadc3057b13e80568ac9da3a28e4e618e"} Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.232311 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x92v8" event={"ID":"25064c1b-8a7b-4a53-8f1e-093d0563002c","Type":"ContainerDied","Data":"a0114c42c1351f0fe75969d84b03d1e99648a652a275933b71139ba0f61d9af4"} Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.232125 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x92v8" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.236615 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25064c1b-8a7b-4a53-8f1e-093d0563002c-utilities\") pod \"25064c1b-8a7b-4a53-8f1e-093d0563002c\" (UID: \"25064c1b-8a7b-4a53-8f1e-093d0563002c\") " Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.237083 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5p2b\" (UniqueName: \"kubernetes.io/projected/25064c1b-8a7b-4a53-8f1e-093d0563002c-kube-api-access-t5p2b\") pod \"25064c1b-8a7b-4a53-8f1e-093d0563002c\" (UID: \"25064c1b-8a7b-4a53-8f1e-093d0563002c\") " Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.237296 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25064c1b-8a7b-4a53-8f1e-093d0563002c-catalog-content\") pod \"25064c1b-8a7b-4a53-8f1e-093d0563002c\" (UID: \"25064c1b-8a7b-4a53-8f1e-093d0563002c\") " Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.239680 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25064c1b-8a7b-4a53-8f1e-093d0563002c-utilities" (OuterVolumeSpecName: "utilities") pod "25064c1b-8a7b-4a53-8f1e-093d0563002c" (UID: "25064c1b-8a7b-4a53-8f1e-093d0563002c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.249900 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25064c1b-8a7b-4a53-8f1e-093d0563002c-kube-api-access-t5p2b" (OuterVolumeSpecName: "kube-api-access-t5p2b") pod "25064c1b-8a7b-4a53-8f1e-093d0563002c" (UID: "25064c1b-8a7b-4a53-8f1e-093d0563002c"). InnerVolumeSpecName "kube-api-access-t5p2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.258891 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25064c1b-8a7b-4a53-8f1e-093d0563002c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "25064c1b-8a7b-4a53-8f1e-093d0563002c" (UID: "25064c1b-8a7b-4a53-8f1e-093d0563002c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.264327 4915 scope.go:117] "RemoveContainer" containerID="509afaad53120dd7289f1738dfde6116dd70406e162efa88d4625a0c2cfb1a79" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.272959 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4rfrb"] Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.283451 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4rfrb"] Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.292422 4915 scope.go:117] "RemoveContainer" containerID="af302e9f9c33bdf092005c7a5b3379f7c43d72095cee2ffb9bc529b11c4dcaae" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.312272 4915 scope.go:117] "RemoveContainer" containerID="f6d1871ed57176ff1a50d0f38ce85ab771678dd11b6f6cd505c69e0978e84d4c" Nov 24 22:18:53 crc kubenswrapper[4915]: E1124 22:18:53.312692 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d1871ed57176ff1a50d0f38ce85ab771678dd11b6f6cd505c69e0978e84d4c\": container with ID starting with f6d1871ed57176ff1a50d0f38ce85ab771678dd11b6f6cd505c69e0978e84d4c not found: ID does not exist" containerID="f6d1871ed57176ff1a50d0f38ce85ab771678dd11b6f6cd505c69e0978e84d4c" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.312738 4915 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f6d1871ed57176ff1a50d0f38ce85ab771678dd11b6f6cd505c69e0978e84d4c"} err="failed to get container status \"f6d1871ed57176ff1a50d0f38ce85ab771678dd11b6f6cd505c69e0978e84d4c\": rpc error: code = NotFound desc = could not find container \"f6d1871ed57176ff1a50d0f38ce85ab771678dd11b6f6cd505c69e0978e84d4c\": container with ID starting with f6d1871ed57176ff1a50d0f38ce85ab771678dd11b6f6cd505c69e0978e84d4c not found: ID does not exist" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.312767 4915 scope.go:117] "RemoveContainer" containerID="509afaad53120dd7289f1738dfde6116dd70406e162efa88d4625a0c2cfb1a79" Nov 24 22:18:53 crc kubenswrapper[4915]: E1124 22:18:53.313085 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"509afaad53120dd7289f1738dfde6116dd70406e162efa88d4625a0c2cfb1a79\": container with ID starting with 509afaad53120dd7289f1738dfde6116dd70406e162efa88d4625a0c2cfb1a79 not found: ID does not exist" containerID="509afaad53120dd7289f1738dfde6116dd70406e162efa88d4625a0c2cfb1a79" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.313134 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"509afaad53120dd7289f1738dfde6116dd70406e162efa88d4625a0c2cfb1a79"} err="failed to get container status \"509afaad53120dd7289f1738dfde6116dd70406e162efa88d4625a0c2cfb1a79\": rpc error: code = NotFound desc = could not find container \"509afaad53120dd7289f1738dfde6116dd70406e162efa88d4625a0c2cfb1a79\": container with ID starting with 509afaad53120dd7289f1738dfde6116dd70406e162efa88d4625a0c2cfb1a79 not found: ID does not exist" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.313156 4915 scope.go:117] "RemoveContainer" containerID="af302e9f9c33bdf092005c7a5b3379f7c43d72095cee2ffb9bc529b11c4dcaae" Nov 24 22:18:53 crc kubenswrapper[4915]: E1124 22:18:53.313454 4915 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"af302e9f9c33bdf092005c7a5b3379f7c43d72095cee2ffb9bc529b11c4dcaae\": container with ID starting with af302e9f9c33bdf092005c7a5b3379f7c43d72095cee2ffb9bc529b11c4dcaae not found: ID does not exist" containerID="af302e9f9c33bdf092005c7a5b3379f7c43d72095cee2ffb9bc529b11c4dcaae" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.313478 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af302e9f9c33bdf092005c7a5b3379f7c43d72095cee2ffb9bc529b11c4dcaae"} err="failed to get container status \"af302e9f9c33bdf092005c7a5b3379f7c43d72095cee2ffb9bc529b11c4dcaae\": rpc error: code = NotFound desc = could not find container \"af302e9f9c33bdf092005c7a5b3379f7c43d72095cee2ffb9bc529b11c4dcaae\": container with ID starting with af302e9f9c33bdf092005c7a5b3379f7c43d72095cee2ffb9bc529b11c4dcaae not found: ID does not exist" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.313492 4915 scope.go:117] "RemoveContainer" containerID="d2770329e38fa6ea811daf854fe9c05dadc3057b13e80568ac9da3a28e4e618e" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.337543 4915 scope.go:117] "RemoveContainer" containerID="22c68ec9867c63432fe5c4cb9f0c9840a6ed94c39734515281037c9ce2dd863f" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.340403 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/25064c1b-8a7b-4a53-8f1e-093d0563002c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.340432 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/25064c1b-8a7b-4a53-8f1e-093d0563002c-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.340444 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5p2b\" (UniqueName: 
\"kubernetes.io/projected/25064c1b-8a7b-4a53-8f1e-093d0563002c-kube-api-access-t5p2b\") on node \"crc\" DevicePath \"\"" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.418725 4915 scope.go:117] "RemoveContainer" containerID="f23ed7334d13811641954dd51b3ecb1b5c7a7055d0027df4662a684b0d120cd6" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.458005 4915 scope.go:117] "RemoveContainer" containerID="d2770329e38fa6ea811daf854fe9c05dadc3057b13e80568ac9da3a28e4e618e" Nov 24 22:18:53 crc kubenswrapper[4915]: E1124 22:18:53.458512 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2770329e38fa6ea811daf854fe9c05dadc3057b13e80568ac9da3a28e4e618e\": container with ID starting with d2770329e38fa6ea811daf854fe9c05dadc3057b13e80568ac9da3a28e4e618e not found: ID does not exist" containerID="d2770329e38fa6ea811daf854fe9c05dadc3057b13e80568ac9da3a28e4e618e" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.458550 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2770329e38fa6ea811daf854fe9c05dadc3057b13e80568ac9da3a28e4e618e"} err="failed to get container status \"d2770329e38fa6ea811daf854fe9c05dadc3057b13e80568ac9da3a28e4e618e\": rpc error: code = NotFound desc = could not find container \"d2770329e38fa6ea811daf854fe9c05dadc3057b13e80568ac9da3a28e4e618e\": container with ID starting with d2770329e38fa6ea811daf854fe9c05dadc3057b13e80568ac9da3a28e4e618e not found: ID does not exist" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.458592 4915 scope.go:117] "RemoveContainer" containerID="22c68ec9867c63432fe5c4cb9f0c9840a6ed94c39734515281037c9ce2dd863f" Nov 24 22:18:53 crc kubenswrapper[4915]: E1124 22:18:53.459092 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22c68ec9867c63432fe5c4cb9f0c9840a6ed94c39734515281037c9ce2dd863f\": container with ID 
starting with 22c68ec9867c63432fe5c4cb9f0c9840a6ed94c39734515281037c9ce2dd863f not found: ID does not exist" containerID="22c68ec9867c63432fe5c4cb9f0c9840a6ed94c39734515281037c9ce2dd863f" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.459151 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c68ec9867c63432fe5c4cb9f0c9840a6ed94c39734515281037c9ce2dd863f"} err="failed to get container status \"22c68ec9867c63432fe5c4cb9f0c9840a6ed94c39734515281037c9ce2dd863f\": rpc error: code = NotFound desc = could not find container \"22c68ec9867c63432fe5c4cb9f0c9840a6ed94c39734515281037c9ce2dd863f\": container with ID starting with 22c68ec9867c63432fe5c4cb9f0c9840a6ed94c39734515281037c9ce2dd863f not found: ID does not exist" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.459182 4915 scope.go:117] "RemoveContainer" containerID="f23ed7334d13811641954dd51b3ecb1b5c7a7055d0027df4662a684b0d120cd6" Nov 24 22:18:53 crc kubenswrapper[4915]: E1124 22:18:53.459538 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f23ed7334d13811641954dd51b3ecb1b5c7a7055d0027df4662a684b0d120cd6\": container with ID starting with f23ed7334d13811641954dd51b3ecb1b5c7a7055d0027df4662a684b0d120cd6 not found: ID does not exist" containerID="f23ed7334d13811641954dd51b3ecb1b5c7a7055d0027df4662a684b0d120cd6" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.459568 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f23ed7334d13811641954dd51b3ecb1b5c7a7055d0027df4662a684b0d120cd6"} err="failed to get container status \"f23ed7334d13811641954dd51b3ecb1b5c7a7055d0027df4662a684b0d120cd6\": rpc error: code = NotFound desc = could not find container \"f23ed7334d13811641954dd51b3ecb1b5c7a7055d0027df4662a684b0d120cd6\": container with ID starting with f23ed7334d13811641954dd51b3ecb1b5c7a7055d0027df4662a684b0d120cd6 not found: 
ID does not exist" Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.575596 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x92v8"] Nov 24 22:18:53 crc kubenswrapper[4915]: I1124 22:18:53.586538 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x92v8"] Nov 24 22:18:54 crc kubenswrapper[4915]: I1124 22:18:54.448162 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25064c1b-8a7b-4a53-8f1e-093d0563002c" path="/var/lib/kubelet/pods/25064c1b-8a7b-4a53-8f1e-093d0563002c/volumes" Nov 24 22:18:54 crc kubenswrapper[4915]: I1124 22:18:54.449683 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66c014aa-fadf-4141-aeee-9b76ab57bb85" path="/var/lib/kubelet/pods/66c014aa-fadf-4141-aeee-9b76ab57bb85/volumes" Nov 24 22:19:24 crc kubenswrapper[4915]: I1124 22:19:24.327470 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:19:24 crc kubenswrapper[4915]: I1124 22:19:24.328006 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:19:54 crc kubenswrapper[4915]: I1124 22:19:54.327630 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:19:54 crc 
kubenswrapper[4915]: I1124 22:19:54.329234 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:20:24 crc kubenswrapper[4915]: I1124 22:20:24.327828 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:20:24 crc kubenswrapper[4915]: I1124 22:20:24.328427 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:20:24 crc kubenswrapper[4915]: I1124 22:20:24.328483 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 22:20:24 crc kubenswrapper[4915]: I1124 22:20:24.329368 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:20:24 crc kubenswrapper[4915]: I1124 22:20:24.329422 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" 
podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" gracePeriod=600 Nov 24 22:20:24 crc kubenswrapper[4915]: E1124 22:20:24.457024 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:20:24 crc kubenswrapper[4915]: I1124 22:20:24.462675 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" exitCode=0 Nov 24 22:20:24 crc kubenswrapper[4915]: I1124 22:20:24.462721 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f"} Nov 24 22:20:24 crc kubenswrapper[4915]: I1124 22:20:24.462755 4915 scope.go:117] "RemoveContainer" containerID="9cbd3f603d5d3382fd217f881dc17d503d8cc26c34ddaecaaae4eb7dd1d26609" Nov 24 22:20:24 crc kubenswrapper[4915]: I1124 22:20:24.463513 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:20:24 crc kubenswrapper[4915]: E1124 22:20:24.463922 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:20:39 crc kubenswrapper[4915]: I1124 22:20:39.426957 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:20:39 crc kubenswrapper[4915]: E1124 22:20:39.427990 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:20:52 crc kubenswrapper[4915]: I1124 22:20:52.455807 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:20:52 crc kubenswrapper[4915]: E1124 22:20:52.457118 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:21:03 crc kubenswrapper[4915]: I1124 22:21:03.428989 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:21:03 crc kubenswrapper[4915]: E1124 22:21:03.429628 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:21:16 crc kubenswrapper[4915]: I1124 22:21:16.426942 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:21:16 crc kubenswrapper[4915]: E1124 22:21:16.427700 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:21:27 crc kubenswrapper[4915]: I1124 22:21:27.428054 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:21:27 crc kubenswrapper[4915]: E1124 22:21:27.429126 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:21:40 crc kubenswrapper[4915]: I1124 22:21:40.427497 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:21:40 crc kubenswrapper[4915]: E1124 22:21:40.428324 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:21:52 crc kubenswrapper[4915]: I1124 22:21:52.441245 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:21:52 crc kubenswrapper[4915]: E1124 22:21:52.442565 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:22:03 crc kubenswrapper[4915]: I1124 22:22:03.427622 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:22:03 crc kubenswrapper[4915]: E1124 22:22:03.428380 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:22:16 crc kubenswrapper[4915]: I1124 22:22:16.427193 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:22:16 crc kubenswrapper[4915]: E1124 22:22:16.428107 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:22:28 crc kubenswrapper[4915]: I1124 22:22:28.442869 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:22:28 crc kubenswrapper[4915]: E1124 22:22:28.444193 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:22:39 crc kubenswrapper[4915]: I1124 22:22:39.427567 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:22:39 crc kubenswrapper[4915]: E1124 22:22:39.428962 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:22:52 crc kubenswrapper[4915]: I1124 22:22:52.448889 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:22:52 crc kubenswrapper[4915]: E1124 22:22:52.450169 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:23:03 crc kubenswrapper[4915]: I1124 22:23:03.426182 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:23:03 crc kubenswrapper[4915]: E1124 22:23:03.426846 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:23:18 crc kubenswrapper[4915]: I1124 22:23:18.431523 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:23:18 crc kubenswrapper[4915]: E1124 22:23:18.433566 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:23:31 crc kubenswrapper[4915]: I1124 22:23:31.427388 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:23:31 crc kubenswrapper[4915]: E1124 22:23:31.428290 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:23:46 crc kubenswrapper[4915]: I1124 22:23:46.427522 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:23:46 crc kubenswrapper[4915]: E1124 22:23:46.428514 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:23:49 crc kubenswrapper[4915]: E1124 22:23:49.555901 4915 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.107:45824->38.102.83.107:46247: write tcp 38.102.83.107:45824->38.102.83.107:46247: write: broken pipe Nov 24 22:23:57 crc kubenswrapper[4915]: I1124 22:23:57.426680 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:23:57 crc kubenswrapper[4915]: E1124 22:23:57.427690 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:24:11 crc kubenswrapper[4915]: I1124 
22:24:11.426965 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:24:11 crc kubenswrapper[4915]: E1124 22:24:11.427762 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:24:25 crc kubenswrapper[4915]: I1124 22:24:25.429249 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:24:25 crc kubenswrapper[4915]: E1124 22:24:25.430376 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:24:38 crc kubenswrapper[4915]: I1124 22:24:38.428064 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:24:38 crc kubenswrapper[4915]: E1124 22:24:38.429024 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:24:53 crc 
kubenswrapper[4915]: I1124 22:24:53.426934 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:24:53 crc kubenswrapper[4915]: E1124 22:24:53.427674 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:25:06 crc kubenswrapper[4915]: I1124 22:25:06.431144 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:25:06 crc kubenswrapper[4915]: E1124 22:25:06.432203 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:25:21 crc kubenswrapper[4915]: I1124 22:25:21.427706 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:25:21 crc kubenswrapper[4915]: E1124 22:25:21.429631 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 
24 22:25:33 crc kubenswrapper[4915]: I1124 22:25:33.427269 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:25:34 crc kubenswrapper[4915]: I1124 22:25:34.525013 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"05e5ff85235ffb05ebddacc53a07ad723541a83b073e2a1239e33a4379c037a1"} Nov 24 22:27:54 crc kubenswrapper[4915]: I1124 22:27:54.327915 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:27:54 crc kubenswrapper[4915]: I1124 22:27:54.328495 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:28:24 crc kubenswrapper[4915]: I1124 22:28:24.327686 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:28:24 crc kubenswrapper[4915]: I1124 22:28:24.328283 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.250920 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bghsr"] Nov 24 22:28:52 crc kubenswrapper[4915]: E1124 22:28:52.251944 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25064c1b-8a7b-4a53-8f1e-093d0563002c" containerName="extract-content" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.251964 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="25064c1b-8a7b-4a53-8f1e-093d0563002c" containerName="extract-content" Nov 24 22:28:52 crc kubenswrapper[4915]: E1124 22:28:52.251997 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25064c1b-8a7b-4a53-8f1e-093d0563002c" containerName="registry-server" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.252010 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="25064c1b-8a7b-4a53-8f1e-093d0563002c" containerName="registry-server" Nov 24 22:28:52 crc kubenswrapper[4915]: E1124 22:28:52.252030 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c014aa-fadf-4141-aeee-9b76ab57bb85" containerName="extract-utilities" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.252041 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c014aa-fadf-4141-aeee-9b76ab57bb85" containerName="extract-utilities" Nov 24 22:28:52 crc kubenswrapper[4915]: E1124 22:28:52.252079 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c014aa-fadf-4141-aeee-9b76ab57bb85" containerName="extract-content" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.252090 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c014aa-fadf-4141-aeee-9b76ab57bb85" containerName="extract-content" Nov 24 22:28:52 crc kubenswrapper[4915]: E1124 22:28:52.252126 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c014aa-fadf-4141-aeee-9b76ab57bb85" containerName="registry-server" Nov 24 22:28:52 crc 
kubenswrapper[4915]: I1124 22:28:52.252136 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c014aa-fadf-4141-aeee-9b76ab57bb85" containerName="registry-server" Nov 24 22:28:52 crc kubenswrapper[4915]: E1124 22:28:52.252165 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25064c1b-8a7b-4a53-8f1e-093d0563002c" containerName="extract-utilities" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.252176 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="25064c1b-8a7b-4a53-8f1e-093d0563002c" containerName="extract-utilities" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.252531 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c014aa-fadf-4141-aeee-9b76ab57bb85" containerName="registry-server" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.252565 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="25064c1b-8a7b-4a53-8f1e-093d0563002c" containerName="registry-server" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.254870 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.318463 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bghsr"] Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.342316 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkxxx\" (UniqueName: \"kubernetes.io/projected/a6a913ae-f429-420b-b984-03be168844ea-kube-api-access-vkxxx\") pod \"redhat-marketplace-bghsr\" (UID: \"a6a913ae-f429-420b-b984-03be168844ea\") " pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.342432 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a913ae-f429-420b-b984-03be168844ea-utilities\") pod \"redhat-marketplace-bghsr\" (UID: \"a6a913ae-f429-420b-b984-03be168844ea\") " pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.342483 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a913ae-f429-420b-b984-03be168844ea-catalog-content\") pod \"redhat-marketplace-bghsr\" (UID: \"a6a913ae-f429-420b-b984-03be168844ea\") " pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.445173 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkxxx\" (UniqueName: \"kubernetes.io/projected/a6a913ae-f429-420b-b984-03be168844ea-kube-api-access-vkxxx\") pod \"redhat-marketplace-bghsr\" (UID: \"a6a913ae-f429-420b-b984-03be168844ea\") " pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.445300 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a913ae-f429-420b-b984-03be168844ea-utilities\") pod \"redhat-marketplace-bghsr\" (UID: \"a6a913ae-f429-420b-b984-03be168844ea\") " pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.445370 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a913ae-f429-420b-b984-03be168844ea-catalog-content\") pod \"redhat-marketplace-bghsr\" (UID: \"a6a913ae-f429-420b-b984-03be168844ea\") " pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.445979 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a913ae-f429-420b-b984-03be168844ea-utilities\") pod \"redhat-marketplace-bghsr\" (UID: \"a6a913ae-f429-420b-b984-03be168844ea\") " pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.445986 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a913ae-f429-420b-b984-03be168844ea-catalog-content\") pod \"redhat-marketplace-bghsr\" (UID: \"a6a913ae-f429-420b-b984-03be168844ea\") " pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.470615 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkxxx\" (UniqueName: \"kubernetes.io/projected/a6a913ae-f429-420b-b984-03be168844ea-kube-api-access-vkxxx\") pod \"redhat-marketplace-bghsr\" (UID: \"a6a913ae-f429-420b-b984-03be168844ea\") " pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:28:52 crc kubenswrapper[4915]: I1124 22:28:52.637588 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:28:53 crc kubenswrapper[4915]: W1124 22:28:53.139696 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a913ae_f429_420b_b984_03be168844ea.slice/crio-0cf5e1f27cc685b965ee4627c9303b34ab2a66d9c4091d9ecbe50f0003761ee7 WatchSource:0}: Error finding container 0cf5e1f27cc685b965ee4627c9303b34ab2a66d9c4091d9ecbe50f0003761ee7: Status 404 returned error can't find the container with id 0cf5e1f27cc685b965ee4627c9303b34ab2a66d9c4091d9ecbe50f0003761ee7 Nov 24 22:28:53 crc kubenswrapper[4915]: I1124 22:28:53.146759 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bghsr"] Nov 24 22:28:53 crc kubenswrapper[4915]: I1124 22:28:53.372819 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bghsr" event={"ID":"a6a913ae-f429-420b-b984-03be168844ea","Type":"ContainerStarted","Data":"0cf5e1f27cc685b965ee4627c9303b34ab2a66d9c4091d9ecbe50f0003761ee7"} Nov 24 22:28:54 crc kubenswrapper[4915]: I1124 22:28:54.327824 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:28:54 crc kubenswrapper[4915]: I1124 22:28:54.328202 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:28:54 crc kubenswrapper[4915]: I1124 22:28:54.328269 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 22:28:54 crc kubenswrapper[4915]: I1124 22:28:54.329340 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"05e5ff85235ffb05ebddacc53a07ad723541a83b073e2a1239e33a4379c037a1"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:28:54 crc kubenswrapper[4915]: I1124 22:28:54.329422 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://05e5ff85235ffb05ebddacc53a07ad723541a83b073e2a1239e33a4379c037a1" gracePeriod=600 Nov 24 22:28:54 crc kubenswrapper[4915]: I1124 22:28:54.392720 4915 generic.go:334] "Generic (PLEG): container finished" podID="a6a913ae-f429-420b-b984-03be168844ea" containerID="4f775ac2c53484f0396e37e45d70a7618b9676130f15097cb67fb1105340452a" exitCode=0 Nov 24 22:28:54 crc kubenswrapper[4915]: I1124 22:28:54.392887 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bghsr" event={"ID":"a6a913ae-f429-420b-b984-03be168844ea","Type":"ContainerDied","Data":"4f775ac2c53484f0396e37e45d70a7618b9676130f15097cb67fb1105340452a"} Nov 24 22:28:54 crc kubenswrapper[4915]: I1124 22:28:54.397150 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:28:55 crc kubenswrapper[4915]: I1124 22:28:55.412837 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"05e5ff85235ffb05ebddacc53a07ad723541a83b073e2a1239e33a4379c037a1"} Nov 24 22:28:55 crc 
kubenswrapper[4915]: I1124 22:28:55.412841 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="05e5ff85235ffb05ebddacc53a07ad723541a83b073e2a1239e33a4379c037a1" exitCode=0 Nov 24 22:28:55 crc kubenswrapper[4915]: I1124 22:28:55.414531 4915 scope.go:117] "RemoveContainer" containerID="50adaa21a51d5ce7115d13cf3c49340f370b75afc08031fe6f5ac4f68c1ae24f" Nov 24 22:28:56 crc kubenswrapper[4915]: I1124 22:28:56.622198 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8r2vt"] Nov 24 22:28:56 crc kubenswrapper[4915]: I1124 22:28:56.628497 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:28:56 crc kubenswrapper[4915]: I1124 22:28:56.634696 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8r2vt"] Nov 24 22:28:56 crc kubenswrapper[4915]: I1124 22:28:56.773062 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e5d436-67fe-411f-b770-9ab45a3aa7a1-utilities\") pod \"certified-operators-8r2vt\" (UID: \"29e5d436-67fe-411f-b770-9ab45a3aa7a1\") " pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:28:56 crc kubenswrapper[4915]: I1124 22:28:56.773136 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfps4\" (UniqueName: \"kubernetes.io/projected/29e5d436-67fe-411f-b770-9ab45a3aa7a1-kube-api-access-zfps4\") pod \"certified-operators-8r2vt\" (UID: \"29e5d436-67fe-411f-b770-9ab45a3aa7a1\") " pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:28:56 crc kubenswrapper[4915]: I1124 22:28:56.773208 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/29e5d436-67fe-411f-b770-9ab45a3aa7a1-catalog-content\") pod \"certified-operators-8r2vt\" (UID: \"29e5d436-67fe-411f-b770-9ab45a3aa7a1\") " pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:28:56 crc kubenswrapper[4915]: I1124 22:28:56.875508 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e5d436-67fe-411f-b770-9ab45a3aa7a1-utilities\") pod \"certified-operators-8r2vt\" (UID: \"29e5d436-67fe-411f-b770-9ab45a3aa7a1\") " pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:28:56 crc kubenswrapper[4915]: I1124 22:28:56.875617 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfps4\" (UniqueName: \"kubernetes.io/projected/29e5d436-67fe-411f-b770-9ab45a3aa7a1-kube-api-access-zfps4\") pod \"certified-operators-8r2vt\" (UID: \"29e5d436-67fe-411f-b770-9ab45a3aa7a1\") " pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:28:56 crc kubenswrapper[4915]: I1124 22:28:56.875735 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29e5d436-67fe-411f-b770-9ab45a3aa7a1-catalog-content\") pod \"certified-operators-8r2vt\" (UID: \"29e5d436-67fe-411f-b770-9ab45a3aa7a1\") " pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:28:56 crc kubenswrapper[4915]: I1124 22:28:56.876165 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29e5d436-67fe-411f-b770-9ab45a3aa7a1-utilities\") pod \"certified-operators-8r2vt\" (UID: \"29e5d436-67fe-411f-b770-9ab45a3aa7a1\") " pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:28:56 crc kubenswrapper[4915]: I1124 22:28:56.876293 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/29e5d436-67fe-411f-b770-9ab45a3aa7a1-catalog-content\") pod \"certified-operators-8r2vt\" (UID: \"29e5d436-67fe-411f-b770-9ab45a3aa7a1\") " pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:28:56 crc kubenswrapper[4915]: I1124 22:28:56.899149 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfps4\" (UniqueName: \"kubernetes.io/projected/29e5d436-67fe-411f-b770-9ab45a3aa7a1-kube-api-access-zfps4\") pod \"certified-operators-8r2vt\" (UID: \"29e5d436-67fe-411f-b770-9ab45a3aa7a1\") " pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:28:56 crc kubenswrapper[4915]: I1124 22:28:56.994941 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:28:57 crc kubenswrapper[4915]: I1124 22:28:57.440078 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"} Nov 24 22:28:57 crc kubenswrapper[4915]: I1124 22:28:57.566268 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8r2vt"] Nov 24 22:28:57 crc kubenswrapper[4915]: W1124 22:28:57.767072 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29e5d436_67fe_411f_b770_9ab45a3aa7a1.slice/crio-0ff02bec1db221fdaca88aa3b9c53227dc8d20cbdba5df42f20b5a721367c41c WatchSource:0}: Error finding container 0ff02bec1db221fdaca88aa3b9c53227dc8d20cbdba5df42f20b5a721367c41c: Status 404 returned error can't find the container with id 0ff02bec1db221fdaca88aa3b9c53227dc8d20cbdba5df42f20b5a721367c41c Nov 24 22:28:58 crc kubenswrapper[4915]: I1124 22:28:58.460443 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="29e5d436-67fe-411f-b770-9ab45a3aa7a1" containerID="94d67e6e021c044436bde8fd97fcaa6b4f321351a33763e9e3ecaff120e9472e" exitCode=0 Nov 24 22:28:58 crc kubenswrapper[4915]: I1124 22:28:58.460501 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8r2vt" event={"ID":"29e5d436-67fe-411f-b770-9ab45a3aa7a1","Type":"ContainerDied","Data":"94d67e6e021c044436bde8fd97fcaa6b4f321351a33763e9e3ecaff120e9472e"} Nov 24 22:28:58 crc kubenswrapper[4915]: I1124 22:28:58.460951 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8r2vt" event={"ID":"29e5d436-67fe-411f-b770-9ab45a3aa7a1","Type":"ContainerStarted","Data":"0ff02bec1db221fdaca88aa3b9c53227dc8d20cbdba5df42f20b5a721367c41c"} Nov 24 22:28:58 crc kubenswrapper[4915]: I1124 22:28:58.473653 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bghsr" event={"ID":"a6a913ae-f429-420b-b984-03be168844ea","Type":"ContainerStarted","Data":"f69912ec8f256b522b3dd21eb0c9c0202aa0bb3225b99fbaa070c0635edf808a"} Nov 24 22:29:00 crc kubenswrapper[4915]: I1124 22:29:00.502808 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bghsr" event={"ID":"a6a913ae-f429-420b-b984-03be168844ea","Type":"ContainerDied","Data":"f69912ec8f256b522b3dd21eb0c9c0202aa0bb3225b99fbaa070c0635edf808a"} Nov 24 22:29:00 crc kubenswrapper[4915]: I1124 22:29:00.502764 4915 generic.go:334] "Generic (PLEG): container finished" podID="a6a913ae-f429-420b-b984-03be168844ea" containerID="f69912ec8f256b522b3dd21eb0c9c0202aa0bb3225b99fbaa070c0635edf808a" exitCode=0 Nov 24 22:29:04 crc kubenswrapper[4915]: I1124 22:29:04.073764 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-62vd4"] Nov 24 22:29:04 crc kubenswrapper[4915]: I1124 22:29:04.077760 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:04 crc kubenswrapper[4915]: I1124 22:29:04.102302 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-62vd4"] Nov 24 22:29:04 crc kubenswrapper[4915]: I1124 22:29:04.160203 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985882d4-0533-4ec6-bbaa-b1a414f5791b-utilities\") pod \"community-operators-62vd4\" (UID: \"985882d4-0533-4ec6-bbaa-b1a414f5791b\") " pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:04 crc kubenswrapper[4915]: I1124 22:29:04.160278 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgkfw\" (UniqueName: \"kubernetes.io/projected/985882d4-0533-4ec6-bbaa-b1a414f5791b-kube-api-access-cgkfw\") pod \"community-operators-62vd4\" (UID: \"985882d4-0533-4ec6-bbaa-b1a414f5791b\") " pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:04 crc kubenswrapper[4915]: I1124 22:29:04.160355 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985882d4-0533-4ec6-bbaa-b1a414f5791b-catalog-content\") pod \"community-operators-62vd4\" (UID: \"985882d4-0533-4ec6-bbaa-b1a414f5791b\") " pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:04 crc kubenswrapper[4915]: I1124 22:29:04.262790 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985882d4-0533-4ec6-bbaa-b1a414f5791b-utilities\") pod \"community-operators-62vd4\" (UID: \"985882d4-0533-4ec6-bbaa-b1a414f5791b\") " pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:04 crc kubenswrapper[4915]: I1124 22:29:04.262871 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cgkfw\" (UniqueName: \"kubernetes.io/projected/985882d4-0533-4ec6-bbaa-b1a414f5791b-kube-api-access-cgkfw\") pod \"community-operators-62vd4\" (UID: \"985882d4-0533-4ec6-bbaa-b1a414f5791b\") " pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:04 crc kubenswrapper[4915]: I1124 22:29:04.262940 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985882d4-0533-4ec6-bbaa-b1a414f5791b-catalog-content\") pod \"community-operators-62vd4\" (UID: \"985882d4-0533-4ec6-bbaa-b1a414f5791b\") " pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:04 crc kubenswrapper[4915]: I1124 22:29:04.263485 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985882d4-0533-4ec6-bbaa-b1a414f5791b-catalog-content\") pod \"community-operators-62vd4\" (UID: \"985882d4-0533-4ec6-bbaa-b1a414f5791b\") " pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:04 crc kubenswrapper[4915]: I1124 22:29:04.263722 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985882d4-0533-4ec6-bbaa-b1a414f5791b-utilities\") pod \"community-operators-62vd4\" (UID: \"985882d4-0533-4ec6-bbaa-b1a414f5791b\") " pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:04 crc kubenswrapper[4915]: I1124 22:29:04.282680 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgkfw\" (UniqueName: \"kubernetes.io/projected/985882d4-0533-4ec6-bbaa-b1a414f5791b-kube-api-access-cgkfw\") pod \"community-operators-62vd4\" (UID: \"985882d4-0533-4ec6-bbaa-b1a414f5791b\") " pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:04 crc kubenswrapper[4915]: I1124 22:29:04.427648 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:05 crc kubenswrapper[4915]: I1124 22:29:05.921498 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-62vd4"] Nov 24 22:29:06 crc kubenswrapper[4915]: I1124 22:29:06.567092 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8r2vt" event={"ID":"29e5d436-67fe-411f-b770-9ab45a3aa7a1","Type":"ContainerStarted","Data":"004932bd3a9df5b5220390776bbdadc7cc946614801655207fd50d1ca550b784"} Nov 24 22:29:06 crc kubenswrapper[4915]: I1124 22:29:06.572127 4915 generic.go:334] "Generic (PLEG): container finished" podID="985882d4-0533-4ec6-bbaa-b1a414f5791b" containerID="34bcb538f7d589e3112291c9668f1aa4f32f678cb65ff2cf9089c08e4176ab8a" exitCode=0 Nov 24 22:29:06 crc kubenswrapper[4915]: I1124 22:29:06.572197 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62vd4" event={"ID":"985882d4-0533-4ec6-bbaa-b1a414f5791b","Type":"ContainerDied","Data":"34bcb538f7d589e3112291c9668f1aa4f32f678cb65ff2cf9089c08e4176ab8a"} Nov 24 22:29:06 crc kubenswrapper[4915]: I1124 22:29:06.572220 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62vd4" event={"ID":"985882d4-0533-4ec6-bbaa-b1a414f5791b","Type":"ContainerStarted","Data":"c58da334c9278148afac35fa046a1a99396199fcf3dd5061ec9efffedf1be77c"} Nov 24 22:29:06 crc kubenswrapper[4915]: I1124 22:29:06.575117 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bghsr" event={"ID":"a6a913ae-f429-420b-b984-03be168844ea","Type":"ContainerStarted","Data":"0856a8992ad10ee630f6b97913b5e6b35012109f7bcd52ab81002cbeba1fa171"} Nov 24 22:29:06 crc kubenswrapper[4915]: I1124 22:29:06.638328 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bghsr" 
podStartSLOduration=3.735391961 podStartE2EDuration="14.638307241s" podCreationTimestamp="2025-11-24 22:28:52 +0000 UTC" firstStartedPulling="2025-11-24 22:28:54.396851225 +0000 UTC m=+4152.713103398" lastFinishedPulling="2025-11-24 22:29:05.299766475 +0000 UTC m=+4163.616018678" observedRunningTime="2025-11-24 22:29:06.625874336 +0000 UTC m=+4164.942126529" watchObservedRunningTime="2025-11-24 22:29:06.638307241 +0000 UTC m=+4164.954559424" Nov 24 22:29:07 crc kubenswrapper[4915]: I1124 22:29:07.593686 4915 generic.go:334] "Generic (PLEG): container finished" podID="29e5d436-67fe-411f-b770-9ab45a3aa7a1" containerID="004932bd3a9df5b5220390776bbdadc7cc946614801655207fd50d1ca550b784" exitCode=0 Nov 24 22:29:07 crc kubenswrapper[4915]: I1124 22:29:07.593734 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8r2vt" event={"ID":"29e5d436-67fe-411f-b770-9ab45a3aa7a1","Type":"ContainerDied","Data":"004932bd3a9df5b5220390776bbdadc7cc946614801655207fd50d1ca550b784"} Nov 24 22:29:09 crc kubenswrapper[4915]: I1124 22:29:09.630837 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62vd4" event={"ID":"985882d4-0533-4ec6-bbaa-b1a414f5791b","Type":"ContainerStarted","Data":"d74297b4bc7547849b14b9ce9332bc5c6ddbe65aa879a4e126f1289d41976d24"} Nov 24 22:29:09 crc kubenswrapper[4915]: I1124 22:29:09.634652 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8r2vt" event={"ID":"29e5d436-67fe-411f-b770-9ab45a3aa7a1","Type":"ContainerStarted","Data":"6ca838fcd7f87f73bdadacf31dd365f5f69f7f35f0eec537ea355bd2cefe6432"} Nov 24 22:29:09 crc kubenswrapper[4915]: I1124 22:29:09.687991 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8r2vt" podStartSLOduration=4.165609488 podStartE2EDuration="13.687969477s" podCreationTimestamp="2025-11-24 22:28:56 +0000 UTC" 
firstStartedPulling="2025-11-24 22:28:58.464231594 +0000 UTC m=+4156.780483767" lastFinishedPulling="2025-11-24 22:29:07.986591583 +0000 UTC m=+4166.302843756" observedRunningTime="2025-11-24 22:29:09.676298443 +0000 UTC m=+4167.992550636" watchObservedRunningTime="2025-11-24 22:29:09.687969477 +0000 UTC m=+4168.004221650" Nov 24 22:29:11 crc kubenswrapper[4915]: I1124 22:29:11.661586 4915 generic.go:334] "Generic (PLEG): container finished" podID="985882d4-0533-4ec6-bbaa-b1a414f5791b" containerID="d74297b4bc7547849b14b9ce9332bc5c6ddbe65aa879a4e126f1289d41976d24" exitCode=0 Nov 24 22:29:11 crc kubenswrapper[4915]: I1124 22:29:11.661679 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62vd4" event={"ID":"985882d4-0533-4ec6-bbaa-b1a414f5791b","Type":"ContainerDied","Data":"d74297b4bc7547849b14b9ce9332bc5c6ddbe65aa879a4e126f1289d41976d24"} Nov 24 22:29:12 crc kubenswrapper[4915]: I1124 22:29:12.639081 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:29:12 crc kubenswrapper[4915]: I1124 22:29:12.639447 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:29:12 crc kubenswrapper[4915]: I1124 22:29:12.725697 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62vd4" event={"ID":"985882d4-0533-4ec6-bbaa-b1a414f5791b","Type":"ContainerStarted","Data":"15a5991d7f2a3c15a64b42eb36a78e28b447154110d6fc7294d6ddd1f7b09ecf"} Nov 24 22:29:12 crc kubenswrapper[4915]: I1124 22:29:12.753596 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-62vd4" podStartSLOduration=3.070644684 podStartE2EDuration="8.753573643s" podCreationTimestamp="2025-11-24 22:29:04 +0000 UTC" firstStartedPulling="2025-11-24 22:29:06.57409999 +0000 UTC m=+4164.890352163" 
lastFinishedPulling="2025-11-24 22:29:12.257028959 +0000 UTC m=+4170.573281122" observedRunningTime="2025-11-24 22:29:12.744164529 +0000 UTC m=+4171.060416702" watchObservedRunningTime="2025-11-24 22:29:12.753573643 +0000 UTC m=+4171.069825816" Nov 24 22:29:13 crc kubenswrapper[4915]: I1124 22:29:13.722701 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-bghsr" podUID="a6a913ae-f429-420b-b984-03be168844ea" containerName="registry-server" probeResult="failure" output=< Nov 24 22:29:13 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 22:29:13 crc kubenswrapper[4915]: > Nov 24 22:29:14 crc kubenswrapper[4915]: I1124 22:29:14.443795 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:14 crc kubenswrapper[4915]: I1124 22:29:14.444129 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:15 crc kubenswrapper[4915]: I1124 22:29:15.538316 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-62vd4" podUID="985882d4-0533-4ec6-bbaa-b1a414f5791b" containerName="registry-server" probeResult="failure" output=< Nov 24 22:29:15 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 22:29:15 crc kubenswrapper[4915]: > Nov 24 22:29:16 crc kubenswrapper[4915]: I1124 22:29:16.996021 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:29:16 crc kubenswrapper[4915]: I1124 22:29:16.996351 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:29:17 crc kubenswrapper[4915]: I1124 22:29:17.053647 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:29:17 crc kubenswrapper[4915]: I1124 22:29:17.837654 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8r2vt" Nov 24 22:29:17 crc kubenswrapper[4915]: I1124 22:29:17.927619 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8r2vt"] Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.015531 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5hkdx"] Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.015794 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5hkdx" podUID="e0ce54f3-6d64-4d55-85c4-1428aab6cad1" containerName="registry-server" containerID="cri-o://9426ef1ff5a92d2b038745718256dfe5038c364c3e8ea26478f0cee864caefb9" gracePeriod=2 Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.554129 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.622603 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-utilities\") pod \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\" (UID: \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\") " Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.623214 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-catalog-content\") pod \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\" (UID: \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\") " Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.623317 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-276dt\" (UniqueName: \"kubernetes.io/projected/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-kube-api-access-276dt\") pod \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\" (UID: \"e0ce54f3-6d64-4d55-85c4-1428aab6cad1\") " Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.623311 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-utilities" (OuterVolumeSpecName: "utilities") pod "e0ce54f3-6d64-4d55-85c4-1428aab6cad1" (UID: "e0ce54f3-6d64-4d55-85c4-1428aab6cad1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.623905 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.630630 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-kube-api-access-276dt" (OuterVolumeSpecName: "kube-api-access-276dt") pod "e0ce54f3-6d64-4d55-85c4-1428aab6cad1" (UID: "e0ce54f3-6d64-4d55-85c4-1428aab6cad1"). InnerVolumeSpecName "kube-api-access-276dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.704072 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0ce54f3-6d64-4d55-85c4-1428aab6cad1" (UID: "e0ce54f3-6d64-4d55-85c4-1428aab6cad1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.725649 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.725681 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-276dt\" (UniqueName: \"kubernetes.io/projected/e0ce54f3-6d64-4d55-85c4-1428aab6cad1-kube-api-access-276dt\") on node \"crc\" DevicePath \"\"" Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.784421 4915 generic.go:334] "Generic (PLEG): container finished" podID="e0ce54f3-6d64-4d55-85c4-1428aab6cad1" containerID="9426ef1ff5a92d2b038745718256dfe5038c364c3e8ea26478f0cee864caefb9" exitCode=0 Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.784492 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hkdx" Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.784525 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hkdx" event={"ID":"e0ce54f3-6d64-4d55-85c4-1428aab6cad1","Type":"ContainerDied","Data":"9426ef1ff5a92d2b038745718256dfe5038c364c3e8ea26478f0cee864caefb9"} Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.784586 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hkdx" event={"ID":"e0ce54f3-6d64-4d55-85c4-1428aab6cad1","Type":"ContainerDied","Data":"f8e596e81dff3700cdcd035be0e490cb6c56ae7526da317cbeef0d85592fab18"} Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.784608 4915 scope.go:117] "RemoveContainer" containerID="9426ef1ff5a92d2b038745718256dfe5038c364c3e8ea26478f0cee864caefb9" Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.814314 4915 scope.go:117] "RemoveContainer" 
containerID="c06fd06629e316023ba83d66d941ebd5014a19c4210ece5809b7e82a24de9bb3" Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.820159 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5hkdx"] Nov 24 22:29:18 crc kubenswrapper[4915]: I1124 22:29:18.829502 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5hkdx"] Nov 24 22:29:19 crc kubenswrapper[4915]: I1124 22:29:19.275159 4915 scope.go:117] "RemoveContainer" containerID="0001a8fc4732fb7fd374f524706d68f6be6601f9242dcc3c1845d828d56d2afb" Nov 24 22:29:19 crc kubenswrapper[4915]: I1124 22:29:19.348239 4915 scope.go:117] "RemoveContainer" containerID="9426ef1ff5a92d2b038745718256dfe5038c364c3e8ea26478f0cee864caefb9" Nov 24 22:29:19 crc kubenswrapper[4915]: E1124 22:29:19.349280 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9426ef1ff5a92d2b038745718256dfe5038c364c3e8ea26478f0cee864caefb9\": container with ID starting with 9426ef1ff5a92d2b038745718256dfe5038c364c3e8ea26478f0cee864caefb9 not found: ID does not exist" containerID="9426ef1ff5a92d2b038745718256dfe5038c364c3e8ea26478f0cee864caefb9" Nov 24 22:29:19 crc kubenswrapper[4915]: I1124 22:29:19.349343 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9426ef1ff5a92d2b038745718256dfe5038c364c3e8ea26478f0cee864caefb9"} err="failed to get container status \"9426ef1ff5a92d2b038745718256dfe5038c364c3e8ea26478f0cee864caefb9\": rpc error: code = NotFound desc = could not find container \"9426ef1ff5a92d2b038745718256dfe5038c364c3e8ea26478f0cee864caefb9\": container with ID starting with 9426ef1ff5a92d2b038745718256dfe5038c364c3e8ea26478f0cee864caefb9 not found: ID does not exist" Nov 24 22:29:19 crc kubenswrapper[4915]: I1124 22:29:19.349371 4915 scope.go:117] "RemoveContainer" 
containerID="c06fd06629e316023ba83d66d941ebd5014a19c4210ece5809b7e82a24de9bb3" Nov 24 22:29:19 crc kubenswrapper[4915]: E1124 22:29:19.349759 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c06fd06629e316023ba83d66d941ebd5014a19c4210ece5809b7e82a24de9bb3\": container with ID starting with c06fd06629e316023ba83d66d941ebd5014a19c4210ece5809b7e82a24de9bb3 not found: ID does not exist" containerID="c06fd06629e316023ba83d66d941ebd5014a19c4210ece5809b7e82a24de9bb3" Nov 24 22:29:19 crc kubenswrapper[4915]: I1124 22:29:19.349827 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06fd06629e316023ba83d66d941ebd5014a19c4210ece5809b7e82a24de9bb3"} err="failed to get container status \"c06fd06629e316023ba83d66d941ebd5014a19c4210ece5809b7e82a24de9bb3\": rpc error: code = NotFound desc = could not find container \"c06fd06629e316023ba83d66d941ebd5014a19c4210ece5809b7e82a24de9bb3\": container with ID starting with c06fd06629e316023ba83d66d941ebd5014a19c4210ece5809b7e82a24de9bb3 not found: ID does not exist" Nov 24 22:29:19 crc kubenswrapper[4915]: I1124 22:29:19.349859 4915 scope.go:117] "RemoveContainer" containerID="0001a8fc4732fb7fd374f524706d68f6be6601f9242dcc3c1845d828d56d2afb" Nov 24 22:29:19 crc kubenswrapper[4915]: E1124 22:29:19.350180 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0001a8fc4732fb7fd374f524706d68f6be6601f9242dcc3c1845d828d56d2afb\": container with ID starting with 0001a8fc4732fb7fd374f524706d68f6be6601f9242dcc3c1845d828d56d2afb not found: ID does not exist" containerID="0001a8fc4732fb7fd374f524706d68f6be6601f9242dcc3c1845d828d56d2afb" Nov 24 22:29:19 crc kubenswrapper[4915]: I1124 22:29:19.350222 4915 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0001a8fc4732fb7fd374f524706d68f6be6601f9242dcc3c1845d828d56d2afb"} err="failed to get container status \"0001a8fc4732fb7fd374f524706d68f6be6601f9242dcc3c1845d828d56d2afb\": rpc error: code = NotFound desc = could not find container \"0001a8fc4732fb7fd374f524706d68f6be6601f9242dcc3c1845d828d56d2afb\": container with ID starting with 0001a8fc4732fb7fd374f524706d68f6be6601f9242dcc3c1845d828d56d2afb not found: ID does not exist" Nov 24 22:29:20 crc kubenswrapper[4915]: I1124 22:29:20.440883 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ce54f3-6d64-4d55-85c4-1428aab6cad1" path="/var/lib/kubelet/pods/e0ce54f3-6d64-4d55-85c4-1428aab6cad1/volumes" Nov 24 22:29:22 crc kubenswrapper[4915]: I1124 22:29:22.704575 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:29:22 crc kubenswrapper[4915]: I1124 22:29:22.753403 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:29:23 crc kubenswrapper[4915]: I1124 22:29:23.504556 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bghsr"] Nov 24 22:29:23 crc kubenswrapper[4915]: I1124 22:29:23.845542 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bghsr" podUID="a6a913ae-f429-420b-b984-03be168844ea" containerName="registry-server" containerID="cri-o://0856a8992ad10ee630f6b97913b5e6b35012109f7bcd52ab81002cbeba1fa171" gracePeriod=2 Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.405265 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.460953 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkxxx\" (UniqueName: \"kubernetes.io/projected/a6a913ae-f429-420b-b984-03be168844ea-kube-api-access-vkxxx\") pod \"a6a913ae-f429-420b-b984-03be168844ea\" (UID: \"a6a913ae-f429-420b-b984-03be168844ea\") " Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.461104 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a913ae-f429-420b-b984-03be168844ea-utilities\") pod \"a6a913ae-f429-420b-b984-03be168844ea\" (UID: \"a6a913ae-f429-420b-b984-03be168844ea\") " Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.461144 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a913ae-f429-420b-b984-03be168844ea-catalog-content\") pod \"a6a913ae-f429-420b-b984-03be168844ea\" (UID: \"a6a913ae-f429-420b-b984-03be168844ea\") " Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.463234 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6a913ae-f429-420b-b984-03be168844ea-utilities" (OuterVolumeSpecName: "utilities") pod "a6a913ae-f429-420b-b984-03be168844ea" (UID: "a6a913ae-f429-420b-b984-03be168844ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.469768 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a913ae-f429-420b-b984-03be168844ea-kube-api-access-vkxxx" (OuterVolumeSpecName: "kube-api-access-vkxxx") pod "a6a913ae-f429-420b-b984-03be168844ea" (UID: "a6a913ae-f429-420b-b984-03be168844ea"). InnerVolumeSpecName "kube-api-access-vkxxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.481683 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.497377 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6a913ae-f429-420b-b984-03be168844ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6a913ae-f429-420b-b984-03be168844ea" (UID: "a6a913ae-f429-420b-b984-03be168844ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.533420 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.564018 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkxxx\" (UniqueName: \"kubernetes.io/projected/a6a913ae-f429-420b-b984-03be168844ea-kube-api-access-vkxxx\") on node \"crc\" DevicePath \"\"" Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.564051 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6a913ae-f429-420b-b984-03be168844ea-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.564060 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6a913ae-f429-420b-b984-03be168844ea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.865404 4915 generic.go:334] "Generic (PLEG): container finished" podID="a6a913ae-f429-420b-b984-03be168844ea" containerID="0856a8992ad10ee630f6b97913b5e6b35012109f7bcd52ab81002cbeba1fa171" exitCode=0 Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 
22:29:24.865880 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bghsr" Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.866850 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bghsr" event={"ID":"a6a913ae-f429-420b-b984-03be168844ea","Type":"ContainerDied","Data":"0856a8992ad10ee630f6b97913b5e6b35012109f7bcd52ab81002cbeba1fa171"} Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.866894 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bghsr" event={"ID":"a6a913ae-f429-420b-b984-03be168844ea","Type":"ContainerDied","Data":"0cf5e1f27cc685b965ee4627c9303b34ab2a66d9c4091d9ecbe50f0003761ee7"} Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.866924 4915 scope.go:117] "RemoveContainer" containerID="0856a8992ad10ee630f6b97913b5e6b35012109f7bcd52ab81002cbeba1fa171" Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.910745 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bghsr"] Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.912639 4915 scope.go:117] "RemoveContainer" containerID="f69912ec8f256b522b3dd21eb0c9c0202aa0bb3225b99fbaa070c0635edf808a" Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.925334 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bghsr"] Nov 24 22:29:24 crc kubenswrapper[4915]: I1124 22:29:24.955445 4915 scope.go:117] "RemoveContainer" containerID="4f775ac2c53484f0396e37e45d70a7618b9676130f15097cb67fb1105340452a" Nov 24 22:29:25 crc kubenswrapper[4915]: I1124 22:29:25.023351 4915 scope.go:117] "RemoveContainer" containerID="0856a8992ad10ee630f6b97913b5e6b35012109f7bcd52ab81002cbeba1fa171" Nov 24 22:29:25 crc kubenswrapper[4915]: E1124 22:29:25.023976 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"0856a8992ad10ee630f6b97913b5e6b35012109f7bcd52ab81002cbeba1fa171\": container with ID starting with 0856a8992ad10ee630f6b97913b5e6b35012109f7bcd52ab81002cbeba1fa171 not found: ID does not exist" containerID="0856a8992ad10ee630f6b97913b5e6b35012109f7bcd52ab81002cbeba1fa171" Nov 24 22:29:25 crc kubenswrapper[4915]: I1124 22:29:25.024031 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0856a8992ad10ee630f6b97913b5e6b35012109f7bcd52ab81002cbeba1fa171"} err="failed to get container status \"0856a8992ad10ee630f6b97913b5e6b35012109f7bcd52ab81002cbeba1fa171\": rpc error: code = NotFound desc = could not find container \"0856a8992ad10ee630f6b97913b5e6b35012109f7bcd52ab81002cbeba1fa171\": container with ID starting with 0856a8992ad10ee630f6b97913b5e6b35012109f7bcd52ab81002cbeba1fa171 not found: ID does not exist" Nov 24 22:29:25 crc kubenswrapper[4915]: I1124 22:29:25.024064 4915 scope.go:117] "RemoveContainer" containerID="f69912ec8f256b522b3dd21eb0c9c0202aa0bb3225b99fbaa070c0635edf808a" Nov 24 22:29:25 crc kubenswrapper[4915]: E1124 22:29:25.024563 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f69912ec8f256b522b3dd21eb0c9c0202aa0bb3225b99fbaa070c0635edf808a\": container with ID starting with f69912ec8f256b522b3dd21eb0c9c0202aa0bb3225b99fbaa070c0635edf808a not found: ID does not exist" containerID="f69912ec8f256b522b3dd21eb0c9c0202aa0bb3225b99fbaa070c0635edf808a" Nov 24 22:29:25 crc kubenswrapper[4915]: I1124 22:29:25.024607 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f69912ec8f256b522b3dd21eb0c9c0202aa0bb3225b99fbaa070c0635edf808a"} err="failed to get container status \"f69912ec8f256b522b3dd21eb0c9c0202aa0bb3225b99fbaa070c0635edf808a\": rpc error: code = NotFound desc = could not find container 
\"f69912ec8f256b522b3dd21eb0c9c0202aa0bb3225b99fbaa070c0635edf808a\": container with ID starting with f69912ec8f256b522b3dd21eb0c9c0202aa0bb3225b99fbaa070c0635edf808a not found: ID does not exist" Nov 24 22:29:25 crc kubenswrapper[4915]: I1124 22:29:25.024632 4915 scope.go:117] "RemoveContainer" containerID="4f775ac2c53484f0396e37e45d70a7618b9676130f15097cb67fb1105340452a" Nov 24 22:29:25 crc kubenswrapper[4915]: E1124 22:29:25.025154 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f775ac2c53484f0396e37e45d70a7618b9676130f15097cb67fb1105340452a\": container with ID starting with 4f775ac2c53484f0396e37e45d70a7618b9676130f15097cb67fb1105340452a not found: ID does not exist" containerID="4f775ac2c53484f0396e37e45d70a7618b9676130f15097cb67fb1105340452a" Nov 24 22:29:25 crc kubenswrapper[4915]: I1124 22:29:25.025189 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f775ac2c53484f0396e37e45d70a7618b9676130f15097cb67fb1105340452a"} err="failed to get container status \"4f775ac2c53484f0396e37e45d70a7618b9676130f15097cb67fb1105340452a\": rpc error: code = NotFound desc = could not find container \"4f775ac2c53484f0396e37e45d70a7618b9676130f15097cb67fb1105340452a\": container with ID starting with 4f775ac2c53484f0396e37e45d70a7618b9676130f15097cb67fb1105340452a not found: ID does not exist" Nov 24 22:29:26 crc kubenswrapper[4915]: I1124 22:29:26.445395 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6a913ae-f429-420b-b984-03be168844ea" path="/var/lib/kubelet/pods/a6a913ae-f429-420b-b984-03be168844ea/volumes" Nov 24 22:29:26 crc kubenswrapper[4915]: I1124 22:29:26.899397 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-62vd4"] Nov 24 22:29:26 crc kubenswrapper[4915]: I1124 22:29:26.899657 4915 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-62vd4" podUID="985882d4-0533-4ec6-bbaa-b1a414f5791b" containerName="registry-server" containerID="cri-o://15a5991d7f2a3c15a64b42eb36a78e28b447154110d6fc7294d6ddd1f7b09ecf" gracePeriod=2 Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.502856 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.640656 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985882d4-0533-4ec6-bbaa-b1a414f5791b-utilities\") pod \"985882d4-0533-4ec6-bbaa-b1a414f5791b\" (UID: \"985882d4-0533-4ec6-bbaa-b1a414f5791b\") " Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.641000 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgkfw\" (UniqueName: \"kubernetes.io/projected/985882d4-0533-4ec6-bbaa-b1a414f5791b-kube-api-access-cgkfw\") pod \"985882d4-0533-4ec6-bbaa-b1a414f5791b\" (UID: \"985882d4-0533-4ec6-bbaa-b1a414f5791b\") " Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.641116 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985882d4-0533-4ec6-bbaa-b1a414f5791b-catalog-content\") pod \"985882d4-0533-4ec6-bbaa-b1a414f5791b\" (UID: \"985882d4-0533-4ec6-bbaa-b1a414f5791b\") " Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.645445 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/985882d4-0533-4ec6-bbaa-b1a414f5791b-utilities" (OuterVolumeSpecName: "utilities") pod "985882d4-0533-4ec6-bbaa-b1a414f5791b" (UID: "985882d4-0533-4ec6-bbaa-b1a414f5791b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.661092 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985882d4-0533-4ec6-bbaa-b1a414f5791b-kube-api-access-cgkfw" (OuterVolumeSpecName: "kube-api-access-cgkfw") pod "985882d4-0533-4ec6-bbaa-b1a414f5791b" (UID: "985882d4-0533-4ec6-bbaa-b1a414f5791b"). InnerVolumeSpecName "kube-api-access-cgkfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.699267 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/985882d4-0533-4ec6-bbaa-b1a414f5791b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "985882d4-0533-4ec6-bbaa-b1a414f5791b" (UID: "985882d4-0533-4ec6-bbaa-b1a414f5791b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.743926 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985882d4-0533-4ec6-bbaa-b1a414f5791b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.743981 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgkfw\" (UniqueName: \"kubernetes.io/projected/985882d4-0533-4ec6-bbaa-b1a414f5791b-kube-api-access-cgkfw\") on node \"crc\" DevicePath \"\"" Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.743993 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985882d4-0533-4ec6-bbaa-b1a414f5791b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.913818 4915 generic.go:334] "Generic (PLEG): container finished" podID="985882d4-0533-4ec6-bbaa-b1a414f5791b" 
containerID="15a5991d7f2a3c15a64b42eb36a78e28b447154110d6fc7294d6ddd1f7b09ecf" exitCode=0 Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.913861 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62vd4" event={"ID":"985882d4-0533-4ec6-bbaa-b1a414f5791b","Type":"ContainerDied","Data":"15a5991d7f2a3c15a64b42eb36a78e28b447154110d6fc7294d6ddd1f7b09ecf"} Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.913887 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62vd4" event={"ID":"985882d4-0533-4ec6-bbaa-b1a414f5791b","Type":"ContainerDied","Data":"c58da334c9278148afac35fa046a1a99396199fcf3dd5061ec9efffedf1be77c"} Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.913906 4915 scope.go:117] "RemoveContainer" containerID="15a5991d7f2a3c15a64b42eb36a78e28b447154110d6fc7294d6ddd1f7b09ecf" Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.914044 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-62vd4" Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.955603 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-62vd4"] Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.956438 4915 scope.go:117] "RemoveContainer" containerID="d74297b4bc7547849b14b9ce9332bc5c6ddbe65aa879a4e126f1289d41976d24" Nov 24 22:29:27 crc kubenswrapper[4915]: I1124 22:29:27.966429 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-62vd4"] Nov 24 22:29:28 crc kubenswrapper[4915]: I1124 22:29:28.009887 4915 scope.go:117] "RemoveContainer" containerID="34bcb538f7d589e3112291c9668f1aa4f32f678cb65ff2cf9089c08e4176ab8a" Nov 24 22:29:28 crc kubenswrapper[4915]: I1124 22:29:28.057306 4915 scope.go:117] "RemoveContainer" containerID="15a5991d7f2a3c15a64b42eb36a78e28b447154110d6fc7294d6ddd1f7b09ecf" Nov 24 22:29:28 crc kubenswrapper[4915]: E1124 22:29:28.057646 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15a5991d7f2a3c15a64b42eb36a78e28b447154110d6fc7294d6ddd1f7b09ecf\": container with ID starting with 15a5991d7f2a3c15a64b42eb36a78e28b447154110d6fc7294d6ddd1f7b09ecf not found: ID does not exist" containerID="15a5991d7f2a3c15a64b42eb36a78e28b447154110d6fc7294d6ddd1f7b09ecf" Nov 24 22:29:28 crc kubenswrapper[4915]: I1124 22:29:28.057688 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15a5991d7f2a3c15a64b42eb36a78e28b447154110d6fc7294d6ddd1f7b09ecf"} err="failed to get container status \"15a5991d7f2a3c15a64b42eb36a78e28b447154110d6fc7294d6ddd1f7b09ecf\": rpc error: code = NotFound desc = could not find container \"15a5991d7f2a3c15a64b42eb36a78e28b447154110d6fc7294d6ddd1f7b09ecf\": container with ID starting with 15a5991d7f2a3c15a64b42eb36a78e28b447154110d6fc7294d6ddd1f7b09ecf not 
found: ID does not exist" Nov 24 22:29:28 crc kubenswrapper[4915]: I1124 22:29:28.057712 4915 scope.go:117] "RemoveContainer" containerID="d74297b4bc7547849b14b9ce9332bc5c6ddbe65aa879a4e126f1289d41976d24" Nov 24 22:29:28 crc kubenswrapper[4915]: E1124 22:29:28.057979 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d74297b4bc7547849b14b9ce9332bc5c6ddbe65aa879a4e126f1289d41976d24\": container with ID starting with d74297b4bc7547849b14b9ce9332bc5c6ddbe65aa879a4e126f1289d41976d24 not found: ID does not exist" containerID="d74297b4bc7547849b14b9ce9332bc5c6ddbe65aa879a4e126f1289d41976d24" Nov 24 22:29:28 crc kubenswrapper[4915]: I1124 22:29:28.058002 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d74297b4bc7547849b14b9ce9332bc5c6ddbe65aa879a4e126f1289d41976d24"} err="failed to get container status \"d74297b4bc7547849b14b9ce9332bc5c6ddbe65aa879a4e126f1289d41976d24\": rpc error: code = NotFound desc = could not find container \"d74297b4bc7547849b14b9ce9332bc5c6ddbe65aa879a4e126f1289d41976d24\": container with ID starting with d74297b4bc7547849b14b9ce9332bc5c6ddbe65aa879a4e126f1289d41976d24 not found: ID does not exist" Nov 24 22:29:28 crc kubenswrapper[4915]: I1124 22:29:28.058017 4915 scope.go:117] "RemoveContainer" containerID="34bcb538f7d589e3112291c9668f1aa4f32f678cb65ff2cf9089c08e4176ab8a" Nov 24 22:29:28 crc kubenswrapper[4915]: E1124 22:29:28.058224 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34bcb538f7d589e3112291c9668f1aa4f32f678cb65ff2cf9089c08e4176ab8a\": container with ID starting with 34bcb538f7d589e3112291c9668f1aa4f32f678cb65ff2cf9089c08e4176ab8a not found: ID does not exist" containerID="34bcb538f7d589e3112291c9668f1aa4f32f678cb65ff2cf9089c08e4176ab8a" Nov 24 22:29:28 crc kubenswrapper[4915]: I1124 22:29:28.058245 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34bcb538f7d589e3112291c9668f1aa4f32f678cb65ff2cf9089c08e4176ab8a"} err="failed to get container status \"34bcb538f7d589e3112291c9668f1aa4f32f678cb65ff2cf9089c08e4176ab8a\": rpc error: code = NotFound desc = could not find container \"34bcb538f7d589e3112291c9668f1aa4f32f678cb65ff2cf9089c08e4176ab8a\": container with ID starting with 34bcb538f7d589e3112291c9668f1aa4f32f678cb65ff2cf9089c08e4176ab8a not found: ID does not exist" Nov 24 22:29:28 crc kubenswrapper[4915]: I1124 22:29:28.445132 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="985882d4-0533-4ec6-bbaa-b1a414f5791b" path="/var/lib/kubelet/pods/985882d4-0533-4ec6-bbaa-b1a414f5791b/volumes" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.200401 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj"] Nov 24 22:30:00 crc kubenswrapper[4915]: E1124 22:30:00.201297 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a913ae-f429-420b-b984-03be168844ea" containerName="registry-server" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.201310 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a913ae-f429-420b-b984-03be168844ea" containerName="registry-server" Nov 24 22:30:00 crc kubenswrapper[4915]: E1124 22:30:00.201341 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985882d4-0533-4ec6-bbaa-b1a414f5791b" containerName="extract-content" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.201349 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="985882d4-0533-4ec6-bbaa-b1a414f5791b" containerName="extract-content" Nov 24 22:30:00 crc kubenswrapper[4915]: E1124 22:30:00.201366 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ce54f3-6d64-4d55-85c4-1428aab6cad1" containerName="extract-utilities" Nov 24 22:30:00 crc 
kubenswrapper[4915]: I1124 22:30:00.201374 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ce54f3-6d64-4d55-85c4-1428aab6cad1" containerName="extract-utilities" Nov 24 22:30:00 crc kubenswrapper[4915]: E1124 22:30:00.201387 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ce54f3-6d64-4d55-85c4-1428aab6cad1" containerName="registry-server" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.201394 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ce54f3-6d64-4d55-85c4-1428aab6cad1" containerName="registry-server" Nov 24 22:30:00 crc kubenswrapper[4915]: E1124 22:30:00.201416 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985882d4-0533-4ec6-bbaa-b1a414f5791b" containerName="registry-server" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.201422 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="985882d4-0533-4ec6-bbaa-b1a414f5791b" containerName="registry-server" Nov 24 22:30:00 crc kubenswrapper[4915]: E1124 22:30:00.201437 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a913ae-f429-420b-b984-03be168844ea" containerName="extract-utilities" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.201442 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a913ae-f429-420b-b984-03be168844ea" containerName="extract-utilities" Nov 24 22:30:00 crc kubenswrapper[4915]: E1124 22:30:00.201451 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ce54f3-6d64-4d55-85c4-1428aab6cad1" containerName="extract-content" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.201456 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ce54f3-6d64-4d55-85c4-1428aab6cad1" containerName="extract-content" Nov 24 22:30:00 crc kubenswrapper[4915]: E1124 22:30:00.201468 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985882d4-0533-4ec6-bbaa-b1a414f5791b" containerName="extract-utilities" Nov 24 22:30:00 crc 
kubenswrapper[4915]: I1124 22:30:00.201474 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="985882d4-0533-4ec6-bbaa-b1a414f5791b" containerName="extract-utilities" Nov 24 22:30:00 crc kubenswrapper[4915]: E1124 22:30:00.201487 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a913ae-f429-420b-b984-03be168844ea" containerName="extract-content" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.201492 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a913ae-f429-420b-b984-03be168844ea" containerName="extract-content" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.201692 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ce54f3-6d64-4d55-85c4-1428aab6cad1" containerName="registry-server" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.201710 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="985882d4-0533-4ec6-bbaa-b1a414f5791b" containerName="registry-server" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.201738 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6a913ae-f429-420b-b984-03be168844ea" containerName="registry-server" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.202636 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.213080 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.213256 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.221486 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj"] Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.269577 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-config-volume\") pod \"collect-profiles-29400390-wmlbj\" (UID: \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.269813 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msxkr\" (UniqueName: \"kubernetes.io/projected/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-kube-api-access-msxkr\") pod \"collect-profiles-29400390-wmlbj\" (UID: \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.269840 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-secret-volume\") pod \"collect-profiles-29400390-wmlbj\" (UID: \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.372262 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msxkr\" (UniqueName: \"kubernetes.io/projected/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-kube-api-access-msxkr\") pod \"collect-profiles-29400390-wmlbj\" (UID: \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.372312 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-secret-volume\") pod \"collect-profiles-29400390-wmlbj\" (UID: \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.372424 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-config-volume\") pod \"collect-profiles-29400390-wmlbj\" (UID: \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.373559 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-config-volume\") pod \"collect-profiles-29400390-wmlbj\" (UID: \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.388672 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-secret-volume\") pod \"collect-profiles-29400390-wmlbj\" (UID: \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.395543 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msxkr\" (UniqueName: \"kubernetes.io/projected/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-kube-api-access-msxkr\") pod \"collect-profiles-29400390-wmlbj\" (UID: \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" Nov 24 22:30:00 crc kubenswrapper[4915]: I1124 22:30:00.538973 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" Nov 24 22:30:01 crc kubenswrapper[4915]: I1124 22:30:01.029070 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj"] Nov 24 22:30:01 crc kubenswrapper[4915]: I1124 22:30:01.341129 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" event={"ID":"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1","Type":"ContainerStarted","Data":"3735226948a77bf2a9b6295e8b1d7902a207546caf501a087586ff7a0df08d34"} Nov 24 22:30:01 crc kubenswrapper[4915]: I1124 22:30:01.341183 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" event={"ID":"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1","Type":"ContainerStarted","Data":"f1d19d0b3f1af8e633bba929b6b0a9e32ffcab917ef2d5beac56e4379d9ea5f4"} Nov 24 22:30:01 crc kubenswrapper[4915]: I1124 22:30:01.357408 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" 
podStartSLOduration=1.3573892889999999 podStartE2EDuration="1.357389289s" podCreationTimestamp="2025-11-24 22:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 22:30:01.354204473 +0000 UTC m=+4219.670456656" watchObservedRunningTime="2025-11-24 22:30:01.357389289 +0000 UTC m=+4219.673641462" Nov 24 22:30:02 crc kubenswrapper[4915]: I1124 22:30:02.356437 4915 generic.go:334] "Generic (PLEG): container finished" podID="95e5ac9f-3f14-4c8c-b8ac-94ef315888d1" containerID="3735226948a77bf2a9b6295e8b1d7902a207546caf501a087586ff7a0df08d34" exitCode=0 Nov 24 22:30:02 crc kubenswrapper[4915]: I1124 22:30:02.356565 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" event={"ID":"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1","Type":"ContainerDied","Data":"3735226948a77bf2a9b6295e8b1d7902a207546caf501a087586ff7a0df08d34"} Nov 24 22:30:03 crc kubenswrapper[4915]: I1124 22:30:03.816502 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" Nov 24 22:30:03 crc kubenswrapper[4915]: I1124 22:30:03.975299 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-secret-volume\") pod \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\" (UID: \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\") " Nov 24 22:30:03 crc kubenswrapper[4915]: I1124 22:30:03.975463 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msxkr\" (UniqueName: \"kubernetes.io/projected/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-kube-api-access-msxkr\") pod \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\" (UID: \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\") " Nov 24 22:30:03 crc kubenswrapper[4915]: I1124 22:30:03.975762 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-config-volume\") pod \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\" (UID: \"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1\") " Nov 24 22:30:03 crc kubenswrapper[4915]: I1124 22:30:03.976397 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-config-volume" (OuterVolumeSpecName: "config-volume") pod "95e5ac9f-3f14-4c8c-b8ac-94ef315888d1" (UID: "95e5ac9f-3f14-4c8c-b8ac-94ef315888d1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 24 22:30:03 crc kubenswrapper[4915]: I1124 22:30:03.977535 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-config-volume\") on node \"crc\" DevicePath \"\""
Nov 24 22:30:03 crc kubenswrapper[4915]: I1124 22:30:03.982214 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "95e5ac9f-3f14-4c8c-b8ac-94ef315888d1" (UID: "95e5ac9f-3f14-4c8c-b8ac-94ef315888d1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 22:30:03 crc kubenswrapper[4915]: I1124 22:30:03.990121 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-kube-api-access-msxkr" (OuterVolumeSpecName: "kube-api-access-msxkr") pod "95e5ac9f-3f14-4c8c-b8ac-94ef315888d1" (UID: "95e5ac9f-3f14-4c8c-b8ac-94ef315888d1"). InnerVolumeSpecName "kube-api-access-msxkr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:30:04 crc kubenswrapper[4915]: I1124 22:30:04.081251 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 24 22:30:04 crc kubenswrapper[4915]: I1124 22:30:04.081290 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msxkr\" (UniqueName: \"kubernetes.io/projected/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1-kube-api-access-msxkr\") on node \"crc\" DevicePath \"\""
Nov 24 22:30:04 crc kubenswrapper[4915]: I1124 22:30:04.413164 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj" event={"ID":"95e5ac9f-3f14-4c8c-b8ac-94ef315888d1","Type":"ContainerDied","Data":"f1d19d0b3f1af8e633bba929b6b0a9e32ffcab917ef2d5beac56e4379d9ea5f4"}
Nov 24 22:30:04 crc kubenswrapper[4915]: I1124 22:30:04.413823 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1d19d0b3f1af8e633bba929b6b0a9e32ffcab917ef2d5beac56e4379d9ea5f4"
Nov 24 22:30:04 crc kubenswrapper[4915]: I1124 22:30:04.413447 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj"
Nov 24 22:30:04 crc kubenswrapper[4915]: I1124 22:30:04.461027 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv"]
Nov 24 22:30:04 crc kubenswrapper[4915]: I1124 22:30:04.470726 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400345-6jvqv"]
Nov 24 22:30:06 crc kubenswrapper[4915]: I1124 22:30:06.441518 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c25290f4-4f85-47fd-af5d-f141d5bb80a1" path="/var/lib/kubelet/pods/c25290f4-4f85-47fd-af5d-f141d5bb80a1/volumes"
Nov 24 22:30:18 crc kubenswrapper[4915]: I1124 22:30:18.528939 4915 scope.go:117] "RemoveContainer" containerID="4f7e26fb86d3d63488f9eed5b8c0cc93f186bd4474fca2541112fe8c5e504d03"
Nov 24 22:31:24 crc kubenswrapper[4915]: I1124 22:31:24.328928 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 22:31:24 crc kubenswrapper[4915]: I1124 22:31:24.330660 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 22:31:54 crc kubenswrapper[4915]: I1124 22:31:54.327809 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 22:31:54 crc kubenswrapper[4915]: I1124 22:31:54.328432 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 22:32:24 crc kubenswrapper[4915]: I1124 22:32:24.327424 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 22:32:24 crc kubenswrapper[4915]: I1124 22:32:24.328267 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 22:32:24 crc kubenswrapper[4915]: I1124 22:32:24.328354 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd"
Nov 24 22:32:24 crc kubenswrapper[4915]: I1124 22:32:24.329846 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 22:32:24 crc kubenswrapper[4915]: I1124 22:32:24.329951 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f" gracePeriod=600
Nov 24 22:32:24 crc kubenswrapper[4915]: E1124 22:32:24.476143 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:32:25 crc kubenswrapper[4915]: I1124 22:32:25.249462 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f" exitCode=0
Nov 24 22:32:25 crc kubenswrapper[4915]: I1124 22:32:25.249655 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"}
Nov 24 22:32:25 crc kubenswrapper[4915]: I1124 22:32:25.249760 4915 scope.go:117] "RemoveContainer" containerID="05e5ff85235ffb05ebddacc53a07ad723541a83b073e2a1239e33a4379c037a1"
Nov 24 22:32:25 crc kubenswrapper[4915]: I1124 22:32:25.251687 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:32:25 crc kubenswrapper[4915]: E1124 22:32:25.252836 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:32:28 crc kubenswrapper[4915]: E1124 22:32:28.416227 4915 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.107:60818->38.102.83.107:46247: write tcp 38.102.83.107:60818->38.102.83.107:46247: write: broken pipe
Nov 24 22:32:39 crc kubenswrapper[4915]: I1124 22:32:39.427848 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:32:39 crc kubenswrapper[4915]: E1124 22:32:39.429158 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.087589 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n7467"]
Nov 24 22:32:51 crc kubenswrapper[4915]: E1124 22:32:51.088818 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95e5ac9f-3f14-4c8c-b8ac-94ef315888d1" containerName="collect-profiles"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.088835 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="95e5ac9f-3f14-4c8c-b8ac-94ef315888d1" containerName="collect-profiles"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.089070 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="95e5ac9f-3f14-4c8c-b8ac-94ef315888d1" containerName="collect-profiles"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.090654 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.102380 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7467"]
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.195556 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-utilities\") pod \"redhat-operators-n7467\" (UID: \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\") " pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.195647 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf55r\" (UniqueName: \"kubernetes.io/projected/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-kube-api-access-nf55r\") pod \"redhat-operators-n7467\" (UID: \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\") " pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.195736 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-catalog-content\") pod \"redhat-operators-n7467\" (UID: \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\") " pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.297425 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-utilities\") pod \"redhat-operators-n7467\" (UID: \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\") " pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.297506 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf55r\" (UniqueName: \"kubernetes.io/projected/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-kube-api-access-nf55r\") pod \"redhat-operators-n7467\" (UID: \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\") " pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.297581 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-catalog-content\") pod \"redhat-operators-n7467\" (UID: \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\") " pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.297990 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-utilities\") pod \"redhat-operators-n7467\" (UID: \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\") " pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.298086 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-catalog-content\") pod \"redhat-operators-n7467\" (UID: \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\") " pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.321154 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf55r\" (UniqueName: \"kubernetes.io/projected/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-kube-api-access-nf55r\") pod \"redhat-operators-n7467\" (UID: \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\") " pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.408217 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:32:51 crc kubenswrapper[4915]: I1124 22:32:51.929949 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7467"]
Nov 24 22:32:52 crc kubenswrapper[4915]: I1124 22:32:52.662400 4915 generic.go:334] "Generic (PLEG): container finished" podID="ddedfb13-d4bd-438b-9e98-3ecfab7f65be" containerID="fffd52c5509c8210fa8c610f28b00b7ec6d22d87aca777a363bdc741f959f83a" exitCode=0
Nov 24 22:32:52 crc kubenswrapper[4915]: I1124 22:32:52.662520 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7467" event={"ID":"ddedfb13-d4bd-438b-9e98-3ecfab7f65be","Type":"ContainerDied","Data":"fffd52c5509c8210fa8c610f28b00b7ec6d22d87aca777a363bdc741f959f83a"}
Nov 24 22:32:52 crc kubenswrapper[4915]: I1124 22:32:52.662698 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7467" event={"ID":"ddedfb13-d4bd-438b-9e98-3ecfab7f65be","Type":"ContainerStarted","Data":"742161f720da5ecce342e87f9f32afa6ee195e917bdebe53e9357ac64c7a480b"}
Nov 24 22:32:53 crc kubenswrapper[4915]: I1124 22:32:53.681721 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7467" event={"ID":"ddedfb13-d4bd-438b-9e98-3ecfab7f65be","Type":"ContainerStarted","Data":"dffedfa58c86a37bc42435e8d44846c5cbb1f2416ce37b34f10d481cd63586ce"}
Nov 24 22:32:54 crc kubenswrapper[4915]: I1124 22:32:54.427639 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:32:54 crc kubenswrapper[4915]: E1124 22:32:54.428103 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:32:58 crc kubenswrapper[4915]: I1124 22:32:58.745835 4915 generic.go:334] "Generic (PLEG): container finished" podID="ddedfb13-d4bd-438b-9e98-3ecfab7f65be" containerID="dffedfa58c86a37bc42435e8d44846c5cbb1f2416ce37b34f10d481cd63586ce" exitCode=0
Nov 24 22:32:58 crc kubenswrapper[4915]: I1124 22:32:58.746454 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7467" event={"ID":"ddedfb13-d4bd-438b-9e98-3ecfab7f65be","Type":"ContainerDied","Data":"dffedfa58c86a37bc42435e8d44846c5cbb1f2416ce37b34f10d481cd63586ce"}
Nov 24 22:32:59 crc kubenswrapper[4915]: I1124 22:32:59.771331 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7467" event={"ID":"ddedfb13-d4bd-438b-9e98-3ecfab7f65be","Type":"ContainerStarted","Data":"0d0f783e7e3c3916fff4961cae79c72283144ff5e5699863fd7da5b9dc90b4ff"}
Nov 24 22:32:59 crc kubenswrapper[4915]: I1124 22:32:59.800084 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n7467" podStartSLOduration=1.973636881 podStartE2EDuration="8.800058896s" podCreationTimestamp="2025-11-24 22:32:51 +0000 UTC" firstStartedPulling="2025-11-24 22:32:52.664464351 +0000 UTC m=+4390.980716524" lastFinishedPulling="2025-11-24 22:32:59.490886356 +0000 UTC m=+4397.807138539" observedRunningTime="2025-11-24 22:32:59.794338521 +0000 UTC m=+4398.110590714" watchObservedRunningTime="2025-11-24 22:32:59.800058896 +0000 UTC m=+4398.116311109"
Nov 24 22:33:01 crc kubenswrapper[4915]: I1124 22:33:01.408819 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:33:01 crc kubenswrapper[4915]: I1124 22:33:01.409187 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:33:02 crc kubenswrapper[4915]: I1124 22:33:02.489402 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n7467" podUID="ddedfb13-d4bd-438b-9e98-3ecfab7f65be" containerName="registry-server" probeResult="failure" output=<
Nov 24 22:33:02 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s
Nov 24 22:33:02 crc kubenswrapper[4915]: >
Nov 24 22:33:06 crc kubenswrapper[4915]: I1124 22:33:06.426840 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:33:06 crc kubenswrapper[4915]: E1124 22:33:06.427679 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:33:11 crc kubenswrapper[4915]: I1124 22:33:11.478118 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:33:11 crc kubenswrapper[4915]: I1124 22:33:11.554519 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:33:11 crc kubenswrapper[4915]: I1124 22:33:11.722959 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n7467"]
Nov 24 22:33:12 crc kubenswrapper[4915]: I1124 22:33:12.930269 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n7467" podUID="ddedfb13-d4bd-438b-9e98-3ecfab7f65be" containerName="registry-server" containerID="cri-o://0d0f783e7e3c3916fff4961cae79c72283144ff5e5699863fd7da5b9dc90b4ff" gracePeriod=2
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.524160 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.589061 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-catalog-content\") pod \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\" (UID: \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\") "
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.589123 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-utilities\") pod \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\" (UID: \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\") "
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.589150 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf55r\" (UniqueName: \"kubernetes.io/projected/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-kube-api-access-nf55r\") pod \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\" (UID: \"ddedfb13-d4bd-438b-9e98-3ecfab7f65be\") "
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.590842 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-utilities" (OuterVolumeSpecName: "utilities") pod "ddedfb13-d4bd-438b-9e98-3ecfab7f65be" (UID: "ddedfb13-d4bd-438b-9e98-3ecfab7f65be"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.598605 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-kube-api-access-nf55r" (OuterVolumeSpecName: "kube-api-access-nf55r") pod "ddedfb13-d4bd-438b-9e98-3ecfab7f65be" (UID: "ddedfb13-d4bd-438b-9e98-3ecfab7f65be"). InnerVolumeSpecName "kube-api-access-nf55r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.676220 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddedfb13-d4bd-438b-9e98-3ecfab7f65be" (UID: "ddedfb13-d4bd-438b-9e98-3ecfab7f65be"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.692481 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.692522 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.692535 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf55r\" (UniqueName: \"kubernetes.io/projected/ddedfb13-d4bd-438b-9e98-3ecfab7f65be-kube-api-access-nf55r\") on node \"crc\" DevicePath \"\""
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.950838 4915 generic.go:334] "Generic (PLEG): container finished" podID="ddedfb13-d4bd-438b-9e98-3ecfab7f65be" containerID="0d0f783e7e3c3916fff4961cae79c72283144ff5e5699863fd7da5b9dc90b4ff" exitCode=0
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.950920 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7467" event={"ID":"ddedfb13-d4bd-438b-9e98-3ecfab7f65be","Type":"ContainerDied","Data":"0d0f783e7e3c3916fff4961cae79c72283144ff5e5699863fd7da5b9dc90b4ff"}
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.950983 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7467"
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.951228 4915 scope.go:117] "RemoveContainer" containerID="0d0f783e7e3c3916fff4961cae79c72283144ff5e5699863fd7da5b9dc90b4ff"
Nov 24 22:33:13 crc kubenswrapper[4915]: I1124 22:33:13.951204 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7467" event={"ID":"ddedfb13-d4bd-438b-9e98-3ecfab7f65be","Type":"ContainerDied","Data":"742161f720da5ecce342e87f9f32afa6ee195e917bdebe53e9357ac64c7a480b"}
Nov 24 22:33:14 crc kubenswrapper[4915]: I1124 22:33:14.004133 4915 scope.go:117] "RemoveContainer" containerID="dffedfa58c86a37bc42435e8d44846c5cbb1f2416ce37b34f10d481cd63586ce"
Nov 24 22:33:14 crc kubenswrapper[4915]: I1124 22:33:14.017469 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n7467"]
Nov 24 22:33:14 crc kubenswrapper[4915]: I1124 22:33:14.036193 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n7467"]
Nov 24 22:33:14 crc kubenswrapper[4915]: I1124 22:33:14.056382 4915 scope.go:117] "RemoveContainer" containerID="fffd52c5509c8210fa8c610f28b00b7ec6d22d87aca777a363bdc741f959f83a"
Nov 24 22:33:14 crc kubenswrapper[4915]: I1124 22:33:14.116387 4915 scope.go:117] "RemoveContainer" containerID="0d0f783e7e3c3916fff4961cae79c72283144ff5e5699863fd7da5b9dc90b4ff"
Nov 24 22:33:14 crc kubenswrapper[4915]: E1124 22:33:14.117103 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d0f783e7e3c3916fff4961cae79c72283144ff5e5699863fd7da5b9dc90b4ff\": container with ID starting with 0d0f783e7e3c3916fff4961cae79c72283144ff5e5699863fd7da5b9dc90b4ff not found: ID does not exist" containerID="0d0f783e7e3c3916fff4961cae79c72283144ff5e5699863fd7da5b9dc90b4ff"
Nov 24 22:33:14 crc kubenswrapper[4915]: I1124 22:33:14.117160 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d0f783e7e3c3916fff4961cae79c72283144ff5e5699863fd7da5b9dc90b4ff"} err="failed to get container status \"0d0f783e7e3c3916fff4961cae79c72283144ff5e5699863fd7da5b9dc90b4ff\": rpc error: code = NotFound desc = could not find container \"0d0f783e7e3c3916fff4961cae79c72283144ff5e5699863fd7da5b9dc90b4ff\": container with ID starting with 0d0f783e7e3c3916fff4961cae79c72283144ff5e5699863fd7da5b9dc90b4ff not found: ID does not exist"
Nov 24 22:33:14 crc kubenswrapper[4915]: I1124 22:33:14.117192 4915 scope.go:117] "RemoveContainer" containerID="dffedfa58c86a37bc42435e8d44846c5cbb1f2416ce37b34f10d481cd63586ce"
Nov 24 22:33:14 crc kubenswrapper[4915]: E1124 22:33:14.117632 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dffedfa58c86a37bc42435e8d44846c5cbb1f2416ce37b34f10d481cd63586ce\": container with ID starting with dffedfa58c86a37bc42435e8d44846c5cbb1f2416ce37b34f10d481cd63586ce not found: ID does not exist" containerID="dffedfa58c86a37bc42435e8d44846c5cbb1f2416ce37b34f10d481cd63586ce"
Nov 24 22:33:14 crc kubenswrapper[4915]: I1124 22:33:14.117672 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dffedfa58c86a37bc42435e8d44846c5cbb1f2416ce37b34f10d481cd63586ce"} err="failed to get container status \"dffedfa58c86a37bc42435e8d44846c5cbb1f2416ce37b34f10d481cd63586ce\": rpc error: code = NotFound desc = could not find container \"dffedfa58c86a37bc42435e8d44846c5cbb1f2416ce37b34f10d481cd63586ce\": container with ID starting with dffedfa58c86a37bc42435e8d44846c5cbb1f2416ce37b34f10d481cd63586ce not found: ID does not exist"
Nov 24 22:33:14 crc kubenswrapper[4915]: I1124 22:33:14.117700 4915 scope.go:117] "RemoveContainer" containerID="fffd52c5509c8210fa8c610f28b00b7ec6d22d87aca777a363bdc741f959f83a"
Nov 24 22:33:14 crc kubenswrapper[4915]: E1124 22:33:14.118168 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fffd52c5509c8210fa8c610f28b00b7ec6d22d87aca777a363bdc741f959f83a\": container with ID starting with fffd52c5509c8210fa8c610f28b00b7ec6d22d87aca777a363bdc741f959f83a not found: ID does not exist" containerID="fffd52c5509c8210fa8c610f28b00b7ec6d22d87aca777a363bdc741f959f83a"
Nov 24 22:33:14 crc kubenswrapper[4915]: I1124 22:33:14.118199 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fffd52c5509c8210fa8c610f28b00b7ec6d22d87aca777a363bdc741f959f83a"} err="failed to get container status \"fffd52c5509c8210fa8c610f28b00b7ec6d22d87aca777a363bdc741f959f83a\": rpc error: code = NotFound desc = could not find container \"fffd52c5509c8210fa8c610f28b00b7ec6d22d87aca777a363bdc741f959f83a\": container with ID starting with fffd52c5509c8210fa8c610f28b00b7ec6d22d87aca777a363bdc741f959f83a not found: ID does not exist"
Nov 24 22:33:14 crc kubenswrapper[4915]: I1124 22:33:14.451303 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddedfb13-d4bd-438b-9e98-3ecfab7f65be" path="/var/lib/kubelet/pods/ddedfb13-d4bd-438b-9e98-3ecfab7f65be/volumes"
Nov 24 22:33:17 crc kubenswrapper[4915]: I1124 22:33:17.426524 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:33:17 crc kubenswrapper[4915]: E1124 22:33:17.427422 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:33:30 crc kubenswrapper[4915]: I1124 22:33:30.428732 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:33:30 crc kubenswrapper[4915]: E1124 22:33:30.429978 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:33:41 crc kubenswrapper[4915]: I1124 22:33:41.427353 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:33:41 crc kubenswrapper[4915]: E1124 22:33:41.428375 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:33:52 crc kubenswrapper[4915]: I1124 22:33:52.438256 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:33:52 crc kubenswrapper[4915]: E1124 22:33:52.439330 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:34:06 crc kubenswrapper[4915]: I1124 22:34:06.429851 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:34:06 crc kubenswrapper[4915]: E1124 22:34:06.430611 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:34:18 crc kubenswrapper[4915]: I1124 22:34:18.427610 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:34:18 crc kubenswrapper[4915]: E1124 22:34:18.428494 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:34:31 crc kubenswrapper[4915]: I1124 22:34:31.427470 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:34:31 crc kubenswrapper[4915]: E1124 22:34:31.428800 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:34:42 crc kubenswrapper[4915]: I1124 22:34:42.479066 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:34:42 crc kubenswrapper[4915]: E1124 22:34:42.479968 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:34:53 crc kubenswrapper[4915]: I1124 22:34:53.427341 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:34:53 crc kubenswrapper[4915]: E1124 22:34:53.428346 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:35:07 crc kubenswrapper[4915]: I1124 22:35:07.426963 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:35:07 crc kubenswrapper[4915]: E1124 22:35:07.428024 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:35:22 crc kubenswrapper[4915]: I1124 22:35:22.427068 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:35:22 crc kubenswrapper[4915]: E1124 22:35:22.428078 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:35:35 crc kubenswrapper[4915]: I1124 22:35:35.428196 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f"
Nov 24 22:35:35 crc kubenswrapper[4915]: E1124 22:35:35.429280 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 22:35:48 crc kubenswrapper[4915]: I1124 
22:35:48.427465 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f" Nov 24 22:35:48 crc kubenswrapper[4915]: E1124 22:35:48.428721 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:36:00 crc kubenswrapper[4915]: I1124 22:36:00.428298 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f" Nov 24 22:36:00 crc kubenswrapper[4915]: E1124 22:36:00.429492 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:36:15 crc kubenswrapper[4915]: I1124 22:36:15.427515 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f" Nov 24 22:36:15 crc kubenswrapper[4915]: E1124 22:36:15.428523 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:36:28 crc 
kubenswrapper[4915]: I1124 22:36:28.427238 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f" Nov 24 22:36:28 crc kubenswrapper[4915]: E1124 22:36:28.428622 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:36:40 crc kubenswrapper[4915]: I1124 22:36:40.429645 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f" Nov 24 22:36:40 crc kubenswrapper[4915]: E1124 22:36:40.430698 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:36:52 crc kubenswrapper[4915]: I1124 22:36:52.434757 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f" Nov 24 22:36:52 crc kubenswrapper[4915]: E1124 22:36:52.435870 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 
24 22:37:06 crc kubenswrapper[4915]: I1124 22:37:06.427549 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f" Nov 24 22:37:06 crc kubenswrapper[4915]: E1124 22:37:06.428805 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:37:21 crc kubenswrapper[4915]: I1124 22:37:21.429276 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f" Nov 24 22:37:21 crc kubenswrapper[4915]: E1124 22:37:21.430849 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:37:36 crc kubenswrapper[4915]: I1124 22:37:36.429038 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f" Nov 24 22:37:37 crc kubenswrapper[4915]: I1124 22:37:37.365478 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"ea973474b59d7bc0efd115a91a13d725742fe8a4cea1563a201effa13e0bcff0"} Nov 24 22:39:33 crc kubenswrapper[4915]: I1124 22:39:33.909166 4915 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-rhjmj"] Nov 24 22:39:33 crc kubenswrapper[4915]: E1124 22:39:33.910667 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddedfb13-d4bd-438b-9e98-3ecfab7f65be" containerName="registry-server" Nov 24 22:39:33 crc kubenswrapper[4915]: I1124 22:39:33.910690 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddedfb13-d4bd-438b-9e98-3ecfab7f65be" containerName="registry-server" Nov 24 22:39:33 crc kubenswrapper[4915]: E1124 22:39:33.910736 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddedfb13-d4bd-438b-9e98-3ecfab7f65be" containerName="extract-content" Nov 24 22:39:33 crc kubenswrapper[4915]: I1124 22:39:33.910751 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddedfb13-d4bd-438b-9e98-3ecfab7f65be" containerName="extract-content" Nov 24 22:39:33 crc kubenswrapper[4915]: E1124 22:39:33.910853 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddedfb13-d4bd-438b-9e98-3ecfab7f65be" containerName="extract-utilities" Nov 24 22:39:33 crc kubenswrapper[4915]: I1124 22:39:33.910869 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddedfb13-d4bd-438b-9e98-3ecfab7f65be" containerName="extract-utilities" Nov 24 22:39:33 crc kubenswrapper[4915]: I1124 22:39:33.911330 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddedfb13-d4bd-438b-9e98-3ecfab7f65be" containerName="registry-server" Nov 24 22:39:33 crc kubenswrapper[4915]: I1124 22:39:33.916570 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:33 crc kubenswrapper[4915]: I1124 22:39:33.923903 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhjmj"] Nov 24 22:39:34 crc kubenswrapper[4915]: I1124 22:39:34.073536 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-catalog-content\") pod \"redhat-marketplace-rhjmj\" (UID: \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\") " pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:34 crc kubenswrapper[4915]: I1124 22:39:34.073678 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98r6f\" (UniqueName: \"kubernetes.io/projected/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-kube-api-access-98r6f\") pod \"redhat-marketplace-rhjmj\" (UID: \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\") " pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:34 crc kubenswrapper[4915]: I1124 22:39:34.073726 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-utilities\") pod \"redhat-marketplace-rhjmj\" (UID: \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\") " pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:34 crc kubenswrapper[4915]: I1124 22:39:34.175642 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-catalog-content\") pod \"redhat-marketplace-rhjmj\" (UID: \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\") " pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:34 crc kubenswrapper[4915]: I1124 22:39:34.176119 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-98r6f\" (UniqueName: \"kubernetes.io/projected/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-kube-api-access-98r6f\") pod \"redhat-marketplace-rhjmj\" (UID: \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\") " pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:34 crc kubenswrapper[4915]: I1124 22:39:34.176162 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-utilities\") pod \"redhat-marketplace-rhjmj\" (UID: \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\") " pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:34 crc kubenswrapper[4915]: I1124 22:39:34.176311 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-catalog-content\") pod \"redhat-marketplace-rhjmj\" (UID: \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\") " pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:34 crc kubenswrapper[4915]: I1124 22:39:34.176549 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-utilities\") pod \"redhat-marketplace-rhjmj\" (UID: \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\") " pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:34 crc kubenswrapper[4915]: I1124 22:39:34.201633 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98r6f\" (UniqueName: \"kubernetes.io/projected/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-kube-api-access-98r6f\") pod \"redhat-marketplace-rhjmj\" (UID: \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\") " pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:34 crc kubenswrapper[4915]: I1124 22:39:34.257352 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:34 crc kubenswrapper[4915]: I1124 22:39:34.790706 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhjmj"] Nov 24 22:39:34 crc kubenswrapper[4915]: I1124 22:39:34.957482 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhjmj" event={"ID":"0e562dfa-ed3f-4450-b35d-1300d36ff4f9","Type":"ContainerStarted","Data":"a319b59d478e05ea8ccefc015a176dd83d8dbc4cce9ebb11bdf204e01a39acc4"} Nov 24 22:39:35 crc kubenswrapper[4915]: I1124 22:39:35.972001 4915 generic.go:334] "Generic (PLEG): container finished" podID="0e562dfa-ed3f-4450-b35d-1300d36ff4f9" containerID="ca6c788c0b58d2de138fea9ae4b44eb050253cc6877c6a2e5639e1bfd13338d3" exitCode=0 Nov 24 22:39:35 crc kubenswrapper[4915]: I1124 22:39:35.972045 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhjmj" event={"ID":"0e562dfa-ed3f-4450-b35d-1300d36ff4f9","Type":"ContainerDied","Data":"ca6c788c0b58d2de138fea9ae4b44eb050253cc6877c6a2e5639e1bfd13338d3"} Nov 24 22:39:35 crc kubenswrapper[4915]: I1124 22:39:35.974828 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:39:37 crc kubenswrapper[4915]: I1124 22:39:37.891340 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8ljx2"] Nov 24 22:39:37 crc kubenswrapper[4915]: I1124 22:39:37.896741 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:37 crc kubenswrapper[4915]: I1124 22:39:37.911379 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8ljx2"] Nov 24 22:39:37 crc kubenswrapper[4915]: I1124 22:39:37.973617 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-catalog-content\") pod \"certified-operators-8ljx2\" (UID: \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\") " pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:37 crc kubenswrapper[4915]: I1124 22:39:37.974695 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcwtq\" (UniqueName: \"kubernetes.io/projected/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-kube-api-access-vcwtq\") pod \"certified-operators-8ljx2\" (UID: \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\") " pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:37 crc kubenswrapper[4915]: I1124 22:39:37.978344 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-utilities\") pod \"certified-operators-8ljx2\" (UID: \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\") " pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:38 crc kubenswrapper[4915]: I1124 22:39:38.000044 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhjmj" event={"ID":"0e562dfa-ed3f-4450-b35d-1300d36ff4f9","Type":"ContainerStarted","Data":"9812b70fc918796d2d954aaea0b38e5551a6473c11c81211cd12f1d83db66d59"} Nov 24 22:39:38 crc kubenswrapper[4915]: I1124 22:39:38.081400 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-utilities\") pod \"certified-operators-8ljx2\" (UID: \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\") " pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:38 crc kubenswrapper[4915]: I1124 22:39:38.081559 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-catalog-content\") pod \"certified-operators-8ljx2\" (UID: \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\") " pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:38 crc kubenswrapper[4915]: I1124 22:39:38.081767 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcwtq\" (UniqueName: \"kubernetes.io/projected/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-kube-api-access-vcwtq\") pod \"certified-operators-8ljx2\" (UID: \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\") " pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:38 crc kubenswrapper[4915]: I1124 22:39:38.082066 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-utilities\") pod \"certified-operators-8ljx2\" (UID: \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\") " pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:38 crc kubenswrapper[4915]: I1124 22:39:38.082118 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-catalog-content\") pod \"certified-operators-8ljx2\" (UID: \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\") " pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:38 crc kubenswrapper[4915]: I1124 22:39:38.114007 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcwtq\" (UniqueName: 
\"kubernetes.io/projected/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-kube-api-access-vcwtq\") pod \"certified-operators-8ljx2\" (UID: \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\") " pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:38 crc kubenswrapper[4915]: I1124 22:39:38.228558 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:38 crc kubenswrapper[4915]: I1124 22:39:38.782505 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8ljx2"] Nov 24 22:39:39 crc kubenswrapper[4915]: I1124 22:39:39.011895 4915 generic.go:334] "Generic (PLEG): container finished" podID="0e562dfa-ed3f-4450-b35d-1300d36ff4f9" containerID="9812b70fc918796d2d954aaea0b38e5551a6473c11c81211cd12f1d83db66d59" exitCode=0 Nov 24 22:39:39 crc kubenswrapper[4915]: I1124 22:39:39.011955 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhjmj" event={"ID":"0e562dfa-ed3f-4450-b35d-1300d36ff4f9","Type":"ContainerDied","Data":"9812b70fc918796d2d954aaea0b38e5551a6473c11c81211cd12f1d83db66d59"} Nov 24 22:39:39 crc kubenswrapper[4915]: W1124 22:39:39.069007 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f582fd_cebc_4f07_8f7e_4d96e3cb9e00.slice/crio-e14977303e74ac6b4068cbb88b9a84b9df08c9f8637ce08c3a0b2257abf12113 WatchSource:0}: Error finding container e14977303e74ac6b4068cbb88b9a84b9df08c9f8637ce08c3a0b2257abf12113: Status 404 returned error can't find the container with id e14977303e74ac6b4068cbb88b9a84b9df08c9f8637ce08c3a0b2257abf12113 Nov 24 22:39:40 crc kubenswrapper[4915]: I1124 22:39:40.028767 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhjmj" 
event={"ID":"0e562dfa-ed3f-4450-b35d-1300d36ff4f9","Type":"ContainerStarted","Data":"1581242c4f4fa5f33d1d58839c74d4771d0c5410475bdf41350064aae65eed50"} Nov 24 22:39:40 crc kubenswrapper[4915]: I1124 22:39:40.032484 4915 generic.go:334] "Generic (PLEG): container finished" podID="f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" containerID="55d1de7dc9dbc91b4281e6bf9272032c40c23f9e3409ee33b4cabb2e732b4ab7" exitCode=0 Nov 24 22:39:40 crc kubenswrapper[4915]: I1124 22:39:40.032546 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ljx2" event={"ID":"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00","Type":"ContainerDied","Data":"55d1de7dc9dbc91b4281e6bf9272032c40c23f9e3409ee33b4cabb2e732b4ab7"} Nov 24 22:39:40 crc kubenswrapper[4915]: I1124 22:39:40.033137 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ljx2" event={"ID":"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00","Type":"ContainerStarted","Data":"e14977303e74ac6b4068cbb88b9a84b9df08c9f8637ce08c3a0b2257abf12113"} Nov 24 22:39:40 crc kubenswrapper[4915]: I1124 22:39:40.063458 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rhjmj" podStartSLOduration=3.540924143 podStartE2EDuration="7.063436954s" podCreationTimestamp="2025-11-24 22:39:33 +0000 UTC" firstStartedPulling="2025-11-24 22:39:35.974579886 +0000 UTC m=+4794.290832059" lastFinishedPulling="2025-11-24 22:39:39.497092677 +0000 UTC m=+4797.813344870" observedRunningTime="2025-11-24 22:39:40.05955596 +0000 UTC m=+4798.375808163" watchObservedRunningTime="2025-11-24 22:39:40.063436954 +0000 UTC m=+4798.379689157" Nov 24 22:39:41 crc kubenswrapper[4915]: I1124 22:39:41.047724 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ljx2" 
event={"ID":"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00","Type":"ContainerStarted","Data":"13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb"} Nov 24 22:39:43 crc kubenswrapper[4915]: I1124 22:39:43.074515 4915 generic.go:334] "Generic (PLEG): container finished" podID="f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" containerID="13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb" exitCode=0 Nov 24 22:39:43 crc kubenswrapper[4915]: I1124 22:39:43.074591 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ljx2" event={"ID":"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00","Type":"ContainerDied","Data":"13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb"} Nov 24 22:39:44 crc kubenswrapper[4915]: I1124 22:39:44.088377 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ljx2" event={"ID":"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00","Type":"ContainerStarted","Data":"e2c4dee1d0ad112a1b3051c01aba9b0ec74f6eab90ca8488a17fdcd81bee0df4"} Nov 24 22:39:44 crc kubenswrapper[4915]: I1124 22:39:44.122800 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8ljx2" podStartSLOduration=3.660085779 podStartE2EDuration="7.122753586s" podCreationTimestamp="2025-11-24 22:39:37 +0000 UTC" firstStartedPulling="2025-11-24 22:39:40.034583626 +0000 UTC m=+4798.350835839" lastFinishedPulling="2025-11-24 22:39:43.497251463 +0000 UTC m=+4801.813503646" observedRunningTime="2025-11-24 22:39:44.107404951 +0000 UTC m=+4802.423657164" watchObservedRunningTime="2025-11-24 22:39:44.122753586 +0000 UTC m=+4802.439005799" Nov 24 22:39:44 crc kubenswrapper[4915]: I1124 22:39:44.258838 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:44 crc kubenswrapper[4915]: I1124 22:39:44.258898 4915 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:45 crc kubenswrapper[4915]: I1124 22:39:45.339414 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-rhjmj" podUID="0e562dfa-ed3f-4450-b35d-1300d36ff4f9" containerName="registry-server" probeResult="failure" output=< Nov 24 22:39:45 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 22:39:45 crc kubenswrapper[4915]: > Nov 24 22:39:47 crc kubenswrapper[4915]: E1124 22:39:47.060542 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f582fd_cebc_4f07_8f7e_4d96e3cb9e00.slice/crio-13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:39:48 crc kubenswrapper[4915]: E1124 22:39:48.105890 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f582fd_cebc_4f07_8f7e_4d96e3cb9e00.slice/crio-13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:39:48 crc kubenswrapper[4915]: E1124 22:39:48.106615 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f582fd_cebc_4f07_8f7e_4d96e3cb9e00.slice/crio-13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:39:48 crc kubenswrapper[4915]: I1124 22:39:48.229520 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:48 crc kubenswrapper[4915]: I1124 22:39:48.230698 4915 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:49 crc kubenswrapper[4915]: I1124 22:39:49.290635 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8ljx2" podUID="f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" containerName="registry-server" probeResult="failure" output=< Nov 24 22:39:49 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 22:39:49 crc kubenswrapper[4915]: > Nov 24 22:39:51 crc kubenswrapper[4915]: E1124 22:39:51.811834 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f582fd_cebc_4f07_8f7e_4d96e3cb9e00.slice/crio-13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:39:54 crc kubenswrapper[4915]: I1124 22:39:54.327156 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:39:54 crc kubenswrapper[4915]: I1124 22:39:54.327831 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:39:54 crc kubenswrapper[4915]: I1124 22:39:54.338011 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:54 crc kubenswrapper[4915]: I1124 22:39:54.442227 4915 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:54 crc kubenswrapper[4915]: I1124 22:39:54.595733 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhjmj"] Nov 24 22:39:56 crc kubenswrapper[4915]: I1124 22:39:56.243876 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rhjmj" podUID="0e562dfa-ed3f-4450-b35d-1300d36ff4f9" containerName="registry-server" containerID="cri-o://1581242c4f4fa5f33d1d58839c74d4771d0c5410475bdf41350064aae65eed50" gracePeriod=2 Nov 24 22:39:56 crc kubenswrapper[4915]: I1124 22:39:56.790903 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:56 crc kubenswrapper[4915]: I1124 22:39:56.977320 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98r6f\" (UniqueName: \"kubernetes.io/projected/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-kube-api-access-98r6f\") pod \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\" (UID: \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\") " Nov 24 22:39:56 crc kubenswrapper[4915]: I1124 22:39:56.978407 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-catalog-content\") pod \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\" (UID: \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\") " Nov 24 22:39:56 crc kubenswrapper[4915]: I1124 22:39:56.978618 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-utilities\") pod \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\" (UID: \"0e562dfa-ed3f-4450-b35d-1300d36ff4f9\") " Nov 24 22:39:56 crc kubenswrapper[4915]: I1124 22:39:56.979615 4915 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-utilities" (OuterVolumeSpecName: "utilities") pod "0e562dfa-ed3f-4450-b35d-1300d36ff4f9" (UID: "0e562dfa-ed3f-4450-b35d-1300d36ff4f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:39:56 crc kubenswrapper[4915]: I1124 22:39:56.983761 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-kube-api-access-98r6f" (OuterVolumeSpecName: "kube-api-access-98r6f") pod "0e562dfa-ed3f-4450-b35d-1300d36ff4f9" (UID: "0e562dfa-ed3f-4450-b35d-1300d36ff4f9"). InnerVolumeSpecName "kube-api-access-98r6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.002598 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e562dfa-ed3f-4450-b35d-1300d36ff4f9" (UID: "0e562dfa-ed3f-4450-b35d-1300d36ff4f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.083003 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.083050 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98r6f\" (UniqueName: \"kubernetes.io/projected/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-kube-api-access-98r6f\") on node \"crc\" DevicePath \"\"" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.083070 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e562dfa-ed3f-4450-b35d-1300d36ff4f9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.258455 4915 generic.go:334] "Generic (PLEG): container finished" podID="0e562dfa-ed3f-4450-b35d-1300d36ff4f9" containerID="1581242c4f4fa5f33d1d58839c74d4771d0c5410475bdf41350064aae65eed50" exitCode=0 Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.259630 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhjmj" event={"ID":"0e562dfa-ed3f-4450-b35d-1300d36ff4f9","Type":"ContainerDied","Data":"1581242c4f4fa5f33d1d58839c74d4771d0c5410475bdf41350064aae65eed50"} Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.259708 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhjmj" event={"ID":"0e562dfa-ed3f-4450-b35d-1300d36ff4f9","Type":"ContainerDied","Data":"a319b59d478e05ea8ccefc015a176dd83d8dbc4cce9ebb11bdf204e01a39acc4"} Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.259801 4915 scope.go:117] "RemoveContainer" containerID="1581242c4f4fa5f33d1d58839c74d4771d0c5410475bdf41350064aae65eed50" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 
22:39:57.259945 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhjmj" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.316548 4915 scope.go:117] "RemoveContainer" containerID="9812b70fc918796d2d954aaea0b38e5551a6473c11c81211cd12f1d83db66d59" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.367750 4915 scope.go:117] "RemoveContainer" containerID="ca6c788c0b58d2de138fea9ae4b44eb050253cc6877c6a2e5639e1bfd13338d3" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.413145 4915 scope.go:117] "RemoveContainer" containerID="1581242c4f4fa5f33d1d58839c74d4771d0c5410475bdf41350064aae65eed50" Nov 24 22:39:57 crc kubenswrapper[4915]: E1124 22:39:57.413689 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1581242c4f4fa5f33d1d58839c74d4771d0c5410475bdf41350064aae65eed50\": container with ID starting with 1581242c4f4fa5f33d1d58839c74d4771d0c5410475bdf41350064aae65eed50 not found: ID does not exist" containerID="1581242c4f4fa5f33d1d58839c74d4771d0c5410475bdf41350064aae65eed50" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.413729 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1581242c4f4fa5f33d1d58839c74d4771d0c5410475bdf41350064aae65eed50"} err="failed to get container status \"1581242c4f4fa5f33d1d58839c74d4771d0c5410475bdf41350064aae65eed50\": rpc error: code = NotFound desc = could not find container \"1581242c4f4fa5f33d1d58839c74d4771d0c5410475bdf41350064aae65eed50\": container with ID starting with 1581242c4f4fa5f33d1d58839c74d4771d0c5410475bdf41350064aae65eed50 not found: ID does not exist" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.413757 4915 scope.go:117] "RemoveContainer" containerID="9812b70fc918796d2d954aaea0b38e5551a6473c11c81211cd12f1d83db66d59" Nov 24 22:39:57 crc kubenswrapper[4915]: E1124 22:39:57.414268 4915 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9812b70fc918796d2d954aaea0b38e5551a6473c11c81211cd12f1d83db66d59\": container with ID starting with 9812b70fc918796d2d954aaea0b38e5551a6473c11c81211cd12f1d83db66d59 not found: ID does not exist" containerID="9812b70fc918796d2d954aaea0b38e5551a6473c11c81211cd12f1d83db66d59" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.414382 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9812b70fc918796d2d954aaea0b38e5551a6473c11c81211cd12f1d83db66d59"} err="failed to get container status \"9812b70fc918796d2d954aaea0b38e5551a6473c11c81211cd12f1d83db66d59\": rpc error: code = NotFound desc = could not find container \"9812b70fc918796d2d954aaea0b38e5551a6473c11c81211cd12f1d83db66d59\": container with ID starting with 9812b70fc918796d2d954aaea0b38e5551a6473c11c81211cd12f1d83db66d59 not found: ID does not exist" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.414886 4915 scope.go:117] "RemoveContainer" containerID="ca6c788c0b58d2de138fea9ae4b44eb050253cc6877c6a2e5639e1bfd13338d3" Nov 24 22:39:57 crc kubenswrapper[4915]: E1124 22:39:57.415317 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca6c788c0b58d2de138fea9ae4b44eb050253cc6877c6a2e5639e1bfd13338d3\": container with ID starting with ca6c788c0b58d2de138fea9ae4b44eb050253cc6877c6a2e5639e1bfd13338d3 not found: ID does not exist" containerID="ca6c788c0b58d2de138fea9ae4b44eb050253cc6877c6a2e5639e1bfd13338d3" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.415348 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca6c788c0b58d2de138fea9ae4b44eb050253cc6877c6a2e5639e1bfd13338d3"} err="failed to get container status \"ca6c788c0b58d2de138fea9ae4b44eb050253cc6877c6a2e5639e1bfd13338d3\": rpc error: code = NotFound 
desc = could not find container \"ca6c788c0b58d2de138fea9ae4b44eb050253cc6877c6a2e5639e1bfd13338d3\": container with ID starting with ca6c788c0b58d2de138fea9ae4b44eb050253cc6877c6a2e5639e1bfd13338d3 not found: ID does not exist" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.478529 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhjmj"] Nov 24 22:39:57 crc kubenswrapper[4915]: E1124 22:39:57.486641 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f582fd_cebc_4f07_8f7e_4d96e3cb9e00.slice/crio-13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e562dfa_ed3f_4450_b35d_1300d36ff4f9.slice\": RecentStats: unable to find data in memory cache]" Nov 24 22:39:57 crc kubenswrapper[4915]: I1124 22:39:57.490616 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhjmj"] Nov 24 22:39:58 crc kubenswrapper[4915]: I1124 22:39:58.299053 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:58 crc kubenswrapper[4915]: I1124 22:39:58.369201 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:39:58 crc kubenswrapper[4915]: I1124 22:39:58.449062 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e562dfa-ed3f-4450-b35d-1300d36ff4f9" path="/var/lib/kubelet/pods/0e562dfa-ed3f-4450-b35d-1300d36ff4f9/volumes" Nov 24 22:40:00 crc kubenswrapper[4915]: I1124 22:40:00.009475 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8ljx2"] Nov 24 22:40:00 crc kubenswrapper[4915]: 
I1124 22:40:00.297477 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8ljx2" podUID="f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" containerName="registry-server" containerID="cri-o://e2c4dee1d0ad112a1b3051c01aba9b0ec74f6eab90ca8488a17fdcd81bee0df4" gracePeriod=2 Nov 24 22:40:01 crc kubenswrapper[4915]: I1124 22:40:01.336598 4915 generic.go:334] "Generic (PLEG): container finished" podID="f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" containerID="e2c4dee1d0ad112a1b3051c01aba9b0ec74f6eab90ca8488a17fdcd81bee0df4" exitCode=0 Nov 24 22:40:01 crc kubenswrapper[4915]: I1124 22:40:01.336926 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ljx2" event={"ID":"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00","Type":"ContainerDied","Data":"e2c4dee1d0ad112a1b3051c01aba9b0ec74f6eab90ca8488a17fdcd81bee0df4"} Nov 24 22:40:01 crc kubenswrapper[4915]: I1124 22:40:01.505245 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:40:01 crc kubenswrapper[4915]: I1124 22:40:01.646423 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcwtq\" (UniqueName: \"kubernetes.io/projected/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-kube-api-access-vcwtq\") pod \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\" (UID: \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\") " Nov 24 22:40:01 crc kubenswrapper[4915]: I1124 22:40:01.646565 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-utilities\") pod \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\" (UID: \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\") " Nov 24 22:40:01 crc kubenswrapper[4915]: I1124 22:40:01.646643 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-catalog-content\") pod \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\" (UID: \"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00\") " Nov 24 22:40:01 crc kubenswrapper[4915]: I1124 22:40:01.647450 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-utilities" (OuterVolumeSpecName: "utilities") pod "f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" (UID: "f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:40:01 crc kubenswrapper[4915]: I1124 22:40:01.657465 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-kube-api-access-vcwtq" (OuterVolumeSpecName: "kube-api-access-vcwtq") pod "f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" (UID: "f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00"). InnerVolumeSpecName "kube-api-access-vcwtq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:40:01 crc kubenswrapper[4915]: I1124 22:40:01.706914 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" (UID: "f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:40:01 crc kubenswrapper[4915]: I1124 22:40:01.749686 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:40:01 crc kubenswrapper[4915]: I1124 22:40:01.749734 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcwtq\" (UniqueName: \"kubernetes.io/projected/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-kube-api-access-vcwtq\") on node \"crc\" DevicePath \"\"" Nov 24 22:40:01 crc kubenswrapper[4915]: I1124 22:40:01.749753 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:40:02 crc kubenswrapper[4915]: I1124 22:40:02.353462 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8ljx2" event={"ID":"f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00","Type":"ContainerDied","Data":"e14977303e74ac6b4068cbb88b9a84b9df08c9f8637ce08c3a0b2257abf12113"} Nov 24 22:40:02 crc kubenswrapper[4915]: I1124 22:40:02.353515 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8ljx2" Nov 24 22:40:02 crc kubenswrapper[4915]: I1124 22:40:02.353538 4915 scope.go:117] "RemoveContainer" containerID="e2c4dee1d0ad112a1b3051c01aba9b0ec74f6eab90ca8488a17fdcd81bee0df4" Nov 24 22:40:02 crc kubenswrapper[4915]: I1124 22:40:02.381614 4915 scope.go:117] "RemoveContainer" containerID="13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb" Nov 24 22:40:02 crc kubenswrapper[4915]: I1124 22:40:02.391738 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8ljx2"] Nov 24 22:40:02 crc kubenswrapper[4915]: I1124 22:40:02.406088 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8ljx2"] Nov 24 22:40:02 crc kubenswrapper[4915]: I1124 22:40:02.444365 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" path="/var/lib/kubelet/pods/f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00/volumes" Nov 24 22:40:02 crc kubenswrapper[4915]: I1124 22:40:02.485410 4915 scope.go:117] "RemoveContainer" containerID="55d1de7dc9dbc91b4281e6bf9272032c40c23f9e3409ee33b4cabb2e732b4ab7" Nov 24 22:40:06 crc kubenswrapper[4915]: E1124 22:40:06.809394 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f582fd_cebc_4f07_8f7e_4d96e3cb9e00.slice/crio-13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:40:07 crc kubenswrapper[4915]: E1124 22:40:07.730523 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f582fd_cebc_4f07_8f7e_4d96e3cb9e00.slice/crio-13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb.scope\": RecentStats: 
unable to find data in memory cache]" Nov 24 22:40:18 crc kubenswrapper[4915]: E1124 22:40:18.071435 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f582fd_cebc_4f07_8f7e_4d96e3cb9e00.slice/crio-13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:40:21 crc kubenswrapper[4915]: E1124 22:40:21.814248 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f582fd_cebc_4f07_8f7e_4d96e3cb9e00.slice/crio-13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:40:24 crc kubenswrapper[4915]: I1124 22:40:24.327695 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:40:24 crc kubenswrapper[4915]: I1124 22:40:24.328476 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:40:28 crc kubenswrapper[4915]: E1124 22:40:28.361891 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f582fd_cebc_4f07_8f7e_4d96e3cb9e00.slice/crio-13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb.scope\": RecentStats: unable to 
find data in memory cache]" Nov 24 22:40:37 crc kubenswrapper[4915]: E1124 22:40:37.080928 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f582fd_cebc_4f07_8f7e_4d96e3cb9e00.slice/crio-13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:40:38 crc kubenswrapper[4915]: E1124 22:40:38.430375 4915 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1f582fd_cebc_4f07_8f7e_4d96e3cb9e00.slice/crio-13915f84759d9fb926c7ea363d046069241f6bb8a3b479aa93c724df33c939fb.scope\": RecentStats: unable to find data in memory cache]" Nov 24 22:40:42 crc kubenswrapper[4915]: E1124 22:40:42.493245 4915 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/47763f3af81717b976aa31eb54a92b907bf8236b6b7216d3858b8a468694f991/diff" to get inode usage: stat /var/lib/containers/storage/overlay/47763f3af81717b976aa31eb54a92b907bf8236b6b7216d3858b8a468694f991/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openshift-marketplace_certified-operators-8ljx2_f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00/extract-content/0.log" to get inode usage: stat /var/log/pods/openshift-marketplace_certified-operators-8ljx2_f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00/extract-content/0.log: no such file or directory Nov 24 22:40:54 crc kubenswrapper[4915]: I1124 22:40:54.327683 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:40:54 crc kubenswrapper[4915]: I1124 
22:40:54.328139 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:40:54 crc kubenswrapper[4915]: I1124 22:40:54.328180 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 22:40:54 crc kubenswrapper[4915]: I1124 22:40:54.328665 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea973474b59d7bc0efd115a91a13d725742fe8a4cea1563a201effa13e0bcff0"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:40:54 crc kubenswrapper[4915]: I1124 22:40:54.328718 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://ea973474b59d7bc0efd115a91a13d725742fe8a4cea1563a201effa13e0bcff0" gracePeriod=600 Nov 24 22:40:55 crc kubenswrapper[4915]: I1124 22:40:55.075462 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="ea973474b59d7bc0efd115a91a13d725742fe8a4cea1563a201effa13e0bcff0" exitCode=0 Nov 24 22:40:55 crc kubenswrapper[4915]: I1124 22:40:55.075587 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"ea973474b59d7bc0efd115a91a13d725742fe8a4cea1563a201effa13e0bcff0"} Nov 24 22:40:55 crc 
kubenswrapper[4915]: I1124 22:40:55.075814 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5"} Nov 24 22:40:55 crc kubenswrapper[4915]: I1124 22:40:55.075844 4915 scope.go:117] "RemoveContainer" containerID="bc456cd1e964eebc77738a0b189b07c75f8737a62bc684a8dccef19bf963137f" Nov 24 22:42:54 crc kubenswrapper[4915]: I1124 22:42:54.328068 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:42:54 crc kubenswrapper[4915]: I1124 22:42:54.328685 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:43:24 crc kubenswrapper[4915]: I1124 22:43:24.327359 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:43:24 crc kubenswrapper[4915]: I1124 22:43:24.329619 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 
22:43:54 crc kubenswrapper[4915]: I1124 22:43:54.328170 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:43:54 crc kubenswrapper[4915]: I1124 22:43:54.328893 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:43:54 crc kubenswrapper[4915]: I1124 22:43:54.328964 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 22:43:54 crc kubenswrapper[4915]: I1124 22:43:54.330264 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:43:54 crc kubenswrapper[4915]: I1124 22:43:54.330365 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" gracePeriod=600 Nov 24 22:43:54 crc kubenswrapper[4915]: E1124 22:43:54.453915 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:43:54 crc kubenswrapper[4915]: I1124 22:43:54.910197 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" exitCode=0 Nov 24 22:43:54 crc kubenswrapper[4915]: I1124 22:43:54.910332 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5"} Nov 24 22:43:54 crc kubenswrapper[4915]: I1124 22:43:54.910610 4915 scope.go:117] "RemoveContainer" containerID="ea973474b59d7bc0efd115a91a13d725742fe8a4cea1563a201effa13e0bcff0" Nov 24 22:43:54 crc kubenswrapper[4915]: I1124 22:43:54.911872 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:43:54 crc kubenswrapper[4915]: E1124 22:43:54.912525 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:44:09 crc kubenswrapper[4915]: I1124 22:44:09.427176 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:44:09 crc kubenswrapper[4915]: E1124 22:44:09.428234 4915 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:44:24 crc kubenswrapper[4915]: I1124 22:44:24.428548 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:44:24 crc kubenswrapper[4915]: E1124 22:44:24.430072 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:44:35 crc kubenswrapper[4915]: I1124 22:44:35.427671 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:44:35 crc kubenswrapper[4915]: E1124 22:44:35.428694 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:44:47 crc kubenswrapper[4915]: I1124 22:44:47.428032 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:44:47 crc kubenswrapper[4915]: E1124 22:44:47.429295 4915 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.187412 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt"] Nov 24 22:45:00 crc kubenswrapper[4915]: E1124 22:45:00.202251 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" containerName="extract-content" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.202313 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" containerName="extract-content" Nov 24 22:45:00 crc kubenswrapper[4915]: E1124 22:45:00.202354 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" containerName="registry-server" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.202369 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" containerName="registry-server" Nov 24 22:45:00 crc kubenswrapper[4915]: E1124 22:45:00.202420 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e562dfa-ed3f-4450-b35d-1300d36ff4f9" containerName="registry-server" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.202438 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e562dfa-ed3f-4450-b35d-1300d36ff4f9" containerName="registry-server" Nov 24 22:45:00 crc kubenswrapper[4915]: E1124 22:45:00.202480 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" 
containerName="extract-utilities" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.202494 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" containerName="extract-utilities" Nov 24 22:45:00 crc kubenswrapper[4915]: E1124 22:45:00.202523 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e562dfa-ed3f-4450-b35d-1300d36ff4f9" containerName="extract-content" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.202537 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e562dfa-ed3f-4450-b35d-1300d36ff4f9" containerName="extract-content" Nov 24 22:45:00 crc kubenswrapper[4915]: E1124 22:45:00.202567 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e562dfa-ed3f-4450-b35d-1300d36ff4f9" containerName="extract-utilities" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.202580 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e562dfa-ed3f-4450-b35d-1300d36ff4f9" containerName="extract-utilities" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.203187 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1f582fd-cebc-4f07-8f7e-4d96e3cb9e00" containerName="registry-server" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.203247 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e562dfa-ed3f-4450-b35d-1300d36ff4f9" containerName="registry-server" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.204845 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt"] Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.205043 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.208331 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.208446 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.337187 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fzxv\" (UniqueName: \"kubernetes.io/projected/5a065df9-6ce4-477c-a23d-f4eb5757dff2-kube-api-access-2fzxv\") pod \"collect-profiles-29400405-lnstt\" (UID: \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.337604 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a065df9-6ce4-477c-a23d-f4eb5757dff2-config-volume\") pod \"collect-profiles-29400405-lnstt\" (UID: \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.337764 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a065df9-6ce4-477c-a23d-f4eb5757dff2-secret-volume\") pod \"collect-profiles-29400405-lnstt\" (UID: \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.440710 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/5a065df9-6ce4-477c-a23d-f4eb5757dff2-config-volume\") pod \"collect-profiles-29400405-lnstt\" (UID: \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.440835 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a065df9-6ce4-477c-a23d-f4eb5757dff2-secret-volume\") pod \"collect-profiles-29400405-lnstt\" (UID: \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.441123 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fzxv\" (UniqueName: \"kubernetes.io/projected/5a065df9-6ce4-477c-a23d-f4eb5757dff2-kube-api-access-2fzxv\") pod \"collect-profiles-29400405-lnstt\" (UID: \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.442274 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a065df9-6ce4-477c-a23d-f4eb5757dff2-config-volume\") pod \"collect-profiles-29400405-lnstt\" (UID: \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.452238 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a065df9-6ce4-477c-a23d-f4eb5757dff2-secret-volume\") pod \"collect-profiles-29400405-lnstt\" (UID: \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.466559 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fzxv\" (UniqueName: \"kubernetes.io/projected/5a065df9-6ce4-477c-a23d-f4eb5757dff2-kube-api-access-2fzxv\") pod \"collect-profiles-29400405-lnstt\" (UID: \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" Nov 24 22:45:00 crc kubenswrapper[4915]: I1124 22:45:00.537233 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" Nov 24 22:45:01 crc kubenswrapper[4915]: I1124 22:45:01.044277 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt"] Nov 24 22:45:01 crc kubenswrapper[4915]: I1124 22:45:01.891301 4915 generic.go:334] "Generic (PLEG): container finished" podID="5a065df9-6ce4-477c-a23d-f4eb5757dff2" containerID="59712c454fa9711dfe45b9661a08c2d9e78b9f5699a59c0fc2a83f1b8cccb166" exitCode=0 Nov 24 22:45:01 crc kubenswrapper[4915]: I1124 22:45:01.891547 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" event={"ID":"5a065df9-6ce4-477c-a23d-f4eb5757dff2","Type":"ContainerDied","Data":"59712c454fa9711dfe45b9661a08c2d9e78b9f5699a59c0fc2a83f1b8cccb166"} Nov 24 22:45:01 crc kubenswrapper[4915]: I1124 22:45:01.892047 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" event={"ID":"5a065df9-6ce4-477c-a23d-f4eb5757dff2","Type":"ContainerStarted","Data":"57166b5dcb6e93870549baeae43d613d2d96b81f72283b6d701b78859a9fedbe"} Nov 24 22:45:02 crc kubenswrapper[4915]: I1124 22:45:02.443848 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:45:02 crc kubenswrapper[4915]: E1124 22:45:02.444296 4915 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:45:03 crc kubenswrapper[4915]: I1124 22:45:03.323453 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" Nov 24 22:45:03 crc kubenswrapper[4915]: I1124 22:45:03.424310 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fzxv\" (UniqueName: \"kubernetes.io/projected/5a065df9-6ce4-477c-a23d-f4eb5757dff2-kube-api-access-2fzxv\") pod \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\" (UID: \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\") " Nov 24 22:45:03 crc kubenswrapper[4915]: I1124 22:45:03.424646 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a065df9-6ce4-477c-a23d-f4eb5757dff2-secret-volume\") pod \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\" (UID: \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\") " Nov 24 22:45:03 crc kubenswrapper[4915]: I1124 22:45:03.424701 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a065df9-6ce4-477c-a23d-f4eb5757dff2-config-volume\") pod \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\" (UID: \"5a065df9-6ce4-477c-a23d-f4eb5757dff2\") " Nov 24 22:45:03 crc kubenswrapper[4915]: I1124 22:45:03.426228 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a065df9-6ce4-477c-a23d-f4eb5757dff2-config-volume" (OuterVolumeSpecName: "config-volume") pod "5a065df9-6ce4-477c-a23d-f4eb5757dff2" (UID: 
"5a065df9-6ce4-477c-a23d-f4eb5757dff2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 22:45:03 crc kubenswrapper[4915]: I1124 22:45:03.434385 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a065df9-6ce4-477c-a23d-f4eb5757dff2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5a065df9-6ce4-477c-a23d-f4eb5757dff2" (UID: "5a065df9-6ce4-477c-a23d-f4eb5757dff2"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 22:45:03 crc kubenswrapper[4915]: I1124 22:45:03.442611 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a065df9-6ce4-477c-a23d-f4eb5757dff2-kube-api-access-2fzxv" (OuterVolumeSpecName: "kube-api-access-2fzxv") pod "5a065df9-6ce4-477c-a23d-f4eb5757dff2" (UID: "5a065df9-6ce4-477c-a23d-f4eb5757dff2"). InnerVolumeSpecName "kube-api-access-2fzxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:45:03 crc kubenswrapper[4915]: I1124 22:45:03.533833 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a065df9-6ce4-477c-a23d-f4eb5757dff2-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:45:03 crc kubenswrapper[4915]: I1124 22:45:03.533866 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fzxv\" (UniqueName: \"kubernetes.io/projected/5a065df9-6ce4-477c-a23d-f4eb5757dff2-kube-api-access-2fzxv\") on node \"crc\" DevicePath \"\"" Nov 24 22:45:03 crc kubenswrapper[4915]: I1124 22:45:03.533881 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5a065df9-6ce4-477c-a23d-f4eb5757dff2-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 22:45:03 crc kubenswrapper[4915]: I1124 22:45:03.915426 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" event={"ID":"5a065df9-6ce4-477c-a23d-f4eb5757dff2","Type":"ContainerDied","Data":"57166b5dcb6e93870549baeae43d613d2d96b81f72283b6d701b78859a9fedbe"} Nov 24 22:45:03 crc kubenswrapper[4915]: I1124 22:45:03.915465 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57166b5dcb6e93870549baeae43d613d2d96b81f72283b6d701b78859a9fedbe" Nov 24 22:45:03 crc kubenswrapper[4915]: I1124 22:45:03.915534 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400405-lnstt" Nov 24 22:45:04 crc kubenswrapper[4915]: I1124 22:45:04.405448 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw"] Nov 24 22:45:04 crc kubenswrapper[4915]: I1124 22:45:04.417039 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400360-gh2dw"] Nov 24 22:45:04 crc kubenswrapper[4915]: I1124 22:45:04.440115 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2782360d-c02c-49e5-978b-92d85dee47c0" path="/var/lib/kubelet/pods/2782360d-c02c-49e5-978b-92d85dee47c0/volumes" Nov 24 22:45:13 crc kubenswrapper[4915]: I1124 22:45:13.426769 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:45:13 crc kubenswrapper[4915]: E1124 22:45:13.427499 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:45:18 crc 
kubenswrapper[4915]: I1124 22:45:18.616650 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-968b48459-dfqg8" podUID="cdff3fcc-adf5-4e04-9cea-b12c43b4f025" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 24 22:45:19 crc kubenswrapper[4915]: I1124 22:45:19.150219 4915 scope.go:117] "RemoveContainer" containerID="6bf1527c96607dd0362fd85ee2349e8814c726b4a4fcf7e25c06f20ab51115d7" Nov 24 22:45:27 crc kubenswrapper[4915]: I1124 22:45:27.427331 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:45:27 crc kubenswrapper[4915]: E1124 22:45:27.428261 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:45:41 crc kubenswrapper[4915]: I1124 22:45:41.428491 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:45:41 crc kubenswrapper[4915]: E1124 22:45:41.429764 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:45:53 crc kubenswrapper[4915]: I1124 22:45:53.427041 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 
22:45:53 crc kubenswrapper[4915]: E1124 22:45:53.427759 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:46:07 crc kubenswrapper[4915]: I1124 22:46:07.427045 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:46:07 crc kubenswrapper[4915]: E1124 22:46:07.428121 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:46:18 crc kubenswrapper[4915]: I1124 22:46:18.426724 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:46:18 crc kubenswrapper[4915]: E1124 22:46:18.427486 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:46:32 crc kubenswrapper[4915]: I1124 22:46:32.436567 4915 scope.go:117] "RemoveContainer" 
containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:46:32 crc kubenswrapper[4915]: E1124 22:46:32.437432 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:46:37 crc kubenswrapper[4915]: I1124 22:46:37.829083 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9l6rj"] Nov 24 22:46:37 crc kubenswrapper[4915]: E1124 22:46:37.830329 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a065df9-6ce4-477c-a23d-f4eb5757dff2" containerName="collect-profiles" Nov 24 22:46:37 crc kubenswrapper[4915]: I1124 22:46:37.830345 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a065df9-6ce4-477c-a23d-f4eb5757dff2" containerName="collect-profiles" Nov 24 22:46:37 crc kubenswrapper[4915]: I1124 22:46:37.830565 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a065df9-6ce4-477c-a23d-f4eb5757dff2" containerName="collect-profiles" Nov 24 22:46:37 crc kubenswrapper[4915]: I1124 22:46:37.832392 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:46:37 crc kubenswrapper[4915]: I1124 22:46:37.851583 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9l6rj"] Nov 24 22:46:37 crc kubenswrapper[4915]: I1124 22:46:37.990036 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68cddeb-8921-443e-aae0-8b4e74a785f0-catalog-content\") pod \"redhat-operators-9l6rj\" (UID: \"c68cddeb-8921-443e-aae0-8b4e74a785f0\") " pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:46:37 crc kubenswrapper[4915]: I1124 22:46:37.990519 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8c7z\" (UniqueName: \"kubernetes.io/projected/c68cddeb-8921-443e-aae0-8b4e74a785f0-kube-api-access-p8c7z\") pod \"redhat-operators-9l6rj\" (UID: \"c68cddeb-8921-443e-aae0-8b4e74a785f0\") " pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:46:37 crc kubenswrapper[4915]: I1124 22:46:37.991110 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68cddeb-8921-443e-aae0-8b4e74a785f0-utilities\") pod \"redhat-operators-9l6rj\" (UID: \"c68cddeb-8921-443e-aae0-8b4e74a785f0\") " pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:46:38 crc kubenswrapper[4915]: I1124 22:46:38.093524 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68cddeb-8921-443e-aae0-8b4e74a785f0-utilities\") pod \"redhat-operators-9l6rj\" (UID: \"c68cddeb-8921-443e-aae0-8b4e74a785f0\") " pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:46:38 crc kubenswrapper[4915]: I1124 22:46:38.093644 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68cddeb-8921-443e-aae0-8b4e74a785f0-catalog-content\") pod \"redhat-operators-9l6rj\" (UID: \"c68cddeb-8921-443e-aae0-8b4e74a785f0\") " pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:46:38 crc kubenswrapper[4915]: I1124 22:46:38.093714 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8c7z\" (UniqueName: \"kubernetes.io/projected/c68cddeb-8921-443e-aae0-8b4e74a785f0-kube-api-access-p8c7z\") pod \"redhat-operators-9l6rj\" (UID: \"c68cddeb-8921-443e-aae0-8b4e74a785f0\") " pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:46:38 crc kubenswrapper[4915]: I1124 22:46:38.094184 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68cddeb-8921-443e-aae0-8b4e74a785f0-utilities\") pod \"redhat-operators-9l6rj\" (UID: \"c68cddeb-8921-443e-aae0-8b4e74a785f0\") " pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:46:38 crc kubenswrapper[4915]: I1124 22:46:38.094289 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68cddeb-8921-443e-aae0-8b4e74a785f0-catalog-content\") pod \"redhat-operators-9l6rj\" (UID: \"c68cddeb-8921-443e-aae0-8b4e74a785f0\") " pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:46:38 crc kubenswrapper[4915]: I1124 22:46:38.117799 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8c7z\" (UniqueName: \"kubernetes.io/projected/c68cddeb-8921-443e-aae0-8b4e74a785f0-kube-api-access-p8c7z\") pod \"redhat-operators-9l6rj\" (UID: \"c68cddeb-8921-443e-aae0-8b4e74a785f0\") " pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:46:38 crc kubenswrapper[4915]: I1124 22:46:38.172467 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:46:38 crc kubenswrapper[4915]: I1124 22:46:38.672268 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9l6rj"] Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.238226 4915 generic.go:334] "Generic (PLEG): container finished" podID="c68cddeb-8921-443e-aae0-8b4e74a785f0" containerID="6e19df2a36562cd93b3a33cc465e4b66d45e92dc5cdf0b110c3a16c37f94f8cb" exitCode=0 Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.238278 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9l6rj" event={"ID":"c68cddeb-8921-443e-aae0-8b4e74a785f0","Type":"ContainerDied","Data":"6e19df2a36562cd93b3a33cc465e4b66d45e92dc5cdf0b110c3a16c37f94f8cb"} Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.238305 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9l6rj" event={"ID":"c68cddeb-8921-443e-aae0-8b4e74a785f0","Type":"ContainerStarted","Data":"840dcd55b8790919767b43615905e69aafd6d68a8b2f488741e876f3df474429"} Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.240629 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.814152 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-66cqx"] Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.817595 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.839965 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-66cqx"] Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.861833 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-utilities\") pod \"community-operators-66cqx\" (UID: \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\") " pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.861935 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zctrk\" (UniqueName: \"kubernetes.io/projected/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-kube-api-access-zctrk\") pod \"community-operators-66cqx\" (UID: \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\") " pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.862106 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-catalog-content\") pod \"community-operators-66cqx\" (UID: \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\") " pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.964152 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-utilities\") pod \"community-operators-66cqx\" (UID: \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\") " pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.964237 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zctrk\" (UniqueName: \"kubernetes.io/projected/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-kube-api-access-zctrk\") pod \"community-operators-66cqx\" (UID: \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\") " pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.964386 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-catalog-content\") pod \"community-operators-66cqx\" (UID: \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\") " pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.965274 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-utilities\") pod \"community-operators-66cqx\" (UID: \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\") " pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.965291 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-catalog-content\") pod \"community-operators-66cqx\" (UID: \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\") " pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:46:39 crc kubenswrapper[4915]: I1124 22:46:39.984572 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zctrk\" (UniqueName: \"kubernetes.io/projected/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-kube-api-access-zctrk\") pod \"community-operators-66cqx\" (UID: \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\") " pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:46:40 crc kubenswrapper[4915]: I1124 22:46:40.142294 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:46:40 crc kubenswrapper[4915]: I1124 22:46:40.277943 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9l6rj" event={"ID":"c68cddeb-8921-443e-aae0-8b4e74a785f0","Type":"ContainerStarted","Data":"19b8bc928e7d7224507c7adecc6652e911cc8e585f6b991ad9ee5dd534f82a0d"} Nov 24 22:46:40 crc kubenswrapper[4915]: I1124 22:46:40.868757 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-66cqx"] Nov 24 22:46:41 crc kubenswrapper[4915]: I1124 22:46:41.291560 4915 generic.go:334] "Generic (PLEG): container finished" podID="21fc6d9f-92e2-46f5-812a-c2e6dbc66820" containerID="066e0fcaf0524a3e17d20439e204865e9363c8b845da2aecc331aae296ea338d" exitCode=0 Nov 24 22:46:41 crc kubenswrapper[4915]: I1124 22:46:41.291645 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66cqx" event={"ID":"21fc6d9f-92e2-46f5-812a-c2e6dbc66820","Type":"ContainerDied","Data":"066e0fcaf0524a3e17d20439e204865e9363c8b845da2aecc331aae296ea338d"} Nov 24 22:46:41 crc kubenswrapper[4915]: I1124 22:46:41.291725 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66cqx" event={"ID":"21fc6d9f-92e2-46f5-812a-c2e6dbc66820","Type":"ContainerStarted","Data":"78ca83b1b3be7d9dd6d1c787bad9363e8334b74c8d3b31f7ae1c140a14955576"} Nov 24 22:46:42 crc kubenswrapper[4915]: I1124 22:46:42.306980 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66cqx" event={"ID":"21fc6d9f-92e2-46f5-812a-c2e6dbc66820","Type":"ContainerStarted","Data":"c98594b194b64f6f846ec8b297c1c3534324ff36b40166930bb98a3b85d85019"} Nov 24 22:46:45 crc kubenswrapper[4915]: I1124 22:46:45.377447 4915 generic.go:334] "Generic (PLEG): container finished" podID="21fc6d9f-92e2-46f5-812a-c2e6dbc66820" 
containerID="c98594b194b64f6f846ec8b297c1c3534324ff36b40166930bb98a3b85d85019" exitCode=0 Nov 24 22:46:45 crc kubenswrapper[4915]: I1124 22:46:45.377560 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66cqx" event={"ID":"21fc6d9f-92e2-46f5-812a-c2e6dbc66820","Type":"ContainerDied","Data":"c98594b194b64f6f846ec8b297c1c3534324ff36b40166930bb98a3b85d85019"} Nov 24 22:46:45 crc kubenswrapper[4915]: I1124 22:46:45.383333 4915 generic.go:334] "Generic (PLEG): container finished" podID="c68cddeb-8921-443e-aae0-8b4e74a785f0" containerID="19b8bc928e7d7224507c7adecc6652e911cc8e585f6b991ad9ee5dd534f82a0d" exitCode=0 Nov 24 22:46:45 crc kubenswrapper[4915]: I1124 22:46:45.383408 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9l6rj" event={"ID":"c68cddeb-8921-443e-aae0-8b4e74a785f0","Type":"ContainerDied","Data":"19b8bc928e7d7224507c7adecc6652e911cc8e585f6b991ad9ee5dd534f82a0d"} Nov 24 22:46:46 crc kubenswrapper[4915]: I1124 22:46:46.427369 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:46:46 crc kubenswrapper[4915]: E1124 22:46:46.428449 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:46:47 crc kubenswrapper[4915]: I1124 22:46:47.417668 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66cqx" event={"ID":"21fc6d9f-92e2-46f5-812a-c2e6dbc66820","Type":"ContainerStarted","Data":"b0f7acd88e236842712ebf475c96e6f1162e7e21e3bc5b053abba3097c8ee79c"} Nov 24 22:46:47 
crc kubenswrapper[4915]: I1124 22:46:47.428849 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9l6rj" event={"ID":"c68cddeb-8921-443e-aae0-8b4e74a785f0","Type":"ContainerStarted","Data":"c24d8d7c2946ef29e3bb2da28ef823dfd5d54c8d1c3210c04d67941933897b20"} Nov 24 22:46:47 crc kubenswrapper[4915]: I1124 22:46:47.442326 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-66cqx" podStartSLOduration=3.906025826 podStartE2EDuration="8.442302468s" podCreationTimestamp="2025-11-24 22:46:39 +0000 UTC" firstStartedPulling="2025-11-24 22:46:41.293638362 +0000 UTC m=+5219.609890525" lastFinishedPulling="2025-11-24 22:46:45.829914984 +0000 UTC m=+5224.146167167" observedRunningTime="2025-11-24 22:46:47.438197767 +0000 UTC m=+5225.754449980" watchObservedRunningTime="2025-11-24 22:46:47.442302468 +0000 UTC m=+5225.758554671" Nov 24 22:46:47 crc kubenswrapper[4915]: I1124 22:46:47.470937 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9l6rj" podStartSLOduration=3.9273271 podStartE2EDuration="10.47091654s" podCreationTimestamp="2025-11-24 22:46:37 +0000 UTC" firstStartedPulling="2025-11-24 22:46:39.240417977 +0000 UTC m=+5217.556670150" lastFinishedPulling="2025-11-24 22:46:45.784007407 +0000 UTC m=+5224.100259590" observedRunningTime="2025-11-24 22:46:47.45349305 +0000 UTC m=+5225.769745223" watchObservedRunningTime="2025-11-24 22:46:47.47091654 +0000 UTC m=+5225.787168723" Nov 24 22:46:48 crc kubenswrapper[4915]: I1124 22:46:48.173345 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:46:48 crc kubenswrapper[4915]: I1124 22:46:48.173494 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:46:49 crc kubenswrapper[4915]: I1124 
22:46:49.221599 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9l6rj" podUID="c68cddeb-8921-443e-aae0-8b4e74a785f0" containerName="registry-server" probeResult="failure" output=< Nov 24 22:46:49 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 22:46:49 crc kubenswrapper[4915]: > Nov 24 22:46:50 crc kubenswrapper[4915]: I1124 22:46:50.143320 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:46:50 crc kubenswrapper[4915]: I1124 22:46:50.143576 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:46:51 crc kubenswrapper[4915]: I1124 22:46:51.208720 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-66cqx" podUID="21fc6d9f-92e2-46f5-812a-c2e6dbc66820" containerName="registry-server" probeResult="failure" output=< Nov 24 22:46:51 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 22:46:51 crc kubenswrapper[4915]: > Nov 24 22:46:57 crc kubenswrapper[4915]: I1124 22:46:57.428028 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:46:57 crc kubenswrapper[4915]: E1124 22:46:57.428998 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:46:59 crc kubenswrapper[4915]: I1124 22:46:59.222132 4915 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-9l6rj" podUID="c68cddeb-8921-443e-aae0-8b4e74a785f0" containerName="registry-server" probeResult="failure" output=< Nov 24 22:46:59 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 22:46:59 crc kubenswrapper[4915]: > Nov 24 22:47:00 crc kubenswrapper[4915]: I1124 22:47:00.223704 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:47:00 crc kubenswrapper[4915]: I1124 22:47:00.273408 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:47:00 crc kubenswrapper[4915]: I1124 22:47:00.460033 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-66cqx"] Nov 24 22:47:01 crc kubenswrapper[4915]: I1124 22:47:01.619388 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-66cqx" podUID="21fc6d9f-92e2-46f5-812a-c2e6dbc66820" containerName="registry-server" containerID="cri-o://b0f7acd88e236842712ebf475c96e6f1162e7e21e3bc5b053abba3097c8ee79c" gracePeriod=2 Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.262696 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.372420 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-catalog-content\") pod \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\" (UID: \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\") " Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.372760 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-utilities\") pod \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\" (UID: \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\") " Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.372869 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zctrk\" (UniqueName: \"kubernetes.io/projected/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-kube-api-access-zctrk\") pod \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\" (UID: \"21fc6d9f-92e2-46f5-812a-c2e6dbc66820\") " Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.373494 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-utilities" (OuterVolumeSpecName: "utilities") pod "21fc6d9f-92e2-46f5-812a-c2e6dbc66820" (UID: "21fc6d9f-92e2-46f5-812a-c2e6dbc66820"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.379697 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-kube-api-access-zctrk" (OuterVolumeSpecName: "kube-api-access-zctrk") pod "21fc6d9f-92e2-46f5-812a-c2e6dbc66820" (UID: "21fc6d9f-92e2-46f5-812a-c2e6dbc66820"). InnerVolumeSpecName "kube-api-access-zctrk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.430164 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21fc6d9f-92e2-46f5-812a-c2e6dbc66820" (UID: "21fc6d9f-92e2-46f5-812a-c2e6dbc66820"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.475238 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.475277 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zctrk\" (UniqueName: \"kubernetes.io/projected/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-kube-api-access-zctrk\") on node \"crc\" DevicePath \"\"" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.475297 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21fc6d9f-92e2-46f5-812a-c2e6dbc66820-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.654182 4915 generic.go:334] "Generic (PLEG): container finished" podID="21fc6d9f-92e2-46f5-812a-c2e6dbc66820" containerID="b0f7acd88e236842712ebf475c96e6f1162e7e21e3bc5b053abba3097c8ee79c" exitCode=0 Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.654255 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-66cqx" event={"ID":"21fc6d9f-92e2-46f5-812a-c2e6dbc66820","Type":"ContainerDied","Data":"b0f7acd88e236842712ebf475c96e6f1162e7e21e3bc5b053abba3097c8ee79c"} Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.654298 4915 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-66cqx" event={"ID":"21fc6d9f-92e2-46f5-812a-c2e6dbc66820","Type":"ContainerDied","Data":"78ca83b1b3be7d9dd6d1c787bad9363e8334b74c8d3b31f7ae1c140a14955576"} Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.654327 4915 scope.go:117] "RemoveContainer" containerID="b0f7acd88e236842712ebf475c96e6f1162e7e21e3bc5b053abba3097c8ee79c" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.654564 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-66cqx" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.689589 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-66cqx"] Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.689763 4915 scope.go:117] "RemoveContainer" containerID="c98594b194b64f6f846ec8b297c1c3534324ff36b40166930bb98a3b85d85019" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.698825 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-66cqx"] Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.712140 4915 scope.go:117] "RemoveContainer" containerID="066e0fcaf0524a3e17d20439e204865e9363c8b845da2aecc331aae296ea338d" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.772367 4915 scope.go:117] "RemoveContainer" containerID="b0f7acd88e236842712ebf475c96e6f1162e7e21e3bc5b053abba3097c8ee79c" Nov 24 22:47:02 crc kubenswrapper[4915]: E1124 22:47:02.772874 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0f7acd88e236842712ebf475c96e6f1162e7e21e3bc5b053abba3097c8ee79c\": container with ID starting with b0f7acd88e236842712ebf475c96e6f1162e7e21e3bc5b053abba3097c8ee79c not found: ID does not exist" containerID="b0f7acd88e236842712ebf475c96e6f1162e7e21e3bc5b053abba3097c8ee79c" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 
22:47:02.772909 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0f7acd88e236842712ebf475c96e6f1162e7e21e3bc5b053abba3097c8ee79c"} err="failed to get container status \"b0f7acd88e236842712ebf475c96e6f1162e7e21e3bc5b053abba3097c8ee79c\": rpc error: code = NotFound desc = could not find container \"b0f7acd88e236842712ebf475c96e6f1162e7e21e3bc5b053abba3097c8ee79c\": container with ID starting with b0f7acd88e236842712ebf475c96e6f1162e7e21e3bc5b053abba3097c8ee79c not found: ID does not exist" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.772933 4915 scope.go:117] "RemoveContainer" containerID="c98594b194b64f6f846ec8b297c1c3534324ff36b40166930bb98a3b85d85019" Nov 24 22:47:02 crc kubenswrapper[4915]: E1124 22:47:02.773382 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c98594b194b64f6f846ec8b297c1c3534324ff36b40166930bb98a3b85d85019\": container with ID starting with c98594b194b64f6f846ec8b297c1c3534324ff36b40166930bb98a3b85d85019 not found: ID does not exist" containerID="c98594b194b64f6f846ec8b297c1c3534324ff36b40166930bb98a3b85d85019" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.773405 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c98594b194b64f6f846ec8b297c1c3534324ff36b40166930bb98a3b85d85019"} err="failed to get container status \"c98594b194b64f6f846ec8b297c1c3534324ff36b40166930bb98a3b85d85019\": rpc error: code = NotFound desc = could not find container \"c98594b194b64f6f846ec8b297c1c3534324ff36b40166930bb98a3b85d85019\": container with ID starting with c98594b194b64f6f846ec8b297c1c3534324ff36b40166930bb98a3b85d85019 not found: ID does not exist" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.773418 4915 scope.go:117] "RemoveContainer" containerID="066e0fcaf0524a3e17d20439e204865e9363c8b845da2aecc331aae296ea338d" Nov 24 22:47:02 crc 
kubenswrapper[4915]: E1124 22:47:02.773759 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"066e0fcaf0524a3e17d20439e204865e9363c8b845da2aecc331aae296ea338d\": container with ID starting with 066e0fcaf0524a3e17d20439e204865e9363c8b845da2aecc331aae296ea338d not found: ID does not exist" containerID="066e0fcaf0524a3e17d20439e204865e9363c8b845da2aecc331aae296ea338d" Nov 24 22:47:02 crc kubenswrapper[4915]: I1124 22:47:02.773893 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"066e0fcaf0524a3e17d20439e204865e9363c8b845da2aecc331aae296ea338d"} err="failed to get container status \"066e0fcaf0524a3e17d20439e204865e9363c8b845da2aecc331aae296ea338d\": rpc error: code = NotFound desc = could not find container \"066e0fcaf0524a3e17d20439e204865e9363c8b845da2aecc331aae296ea338d\": container with ID starting with 066e0fcaf0524a3e17d20439e204865e9363c8b845da2aecc331aae296ea338d not found: ID does not exist" Nov 24 22:47:04 crc kubenswrapper[4915]: I1124 22:47:04.449475 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21fc6d9f-92e2-46f5-812a-c2e6dbc66820" path="/var/lib/kubelet/pods/21fc6d9f-92e2-46f5-812a-c2e6dbc66820/volumes" Nov 24 22:47:08 crc kubenswrapper[4915]: I1124 22:47:08.238427 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:47:08 crc kubenswrapper[4915]: I1124 22:47:08.302884 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:47:09 crc kubenswrapper[4915]: I1124 22:47:09.028213 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9l6rj"] Nov 24 22:47:09 crc kubenswrapper[4915]: I1124 22:47:09.426953 4915 scope.go:117] "RemoveContainer" 
containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:47:09 crc kubenswrapper[4915]: E1124 22:47:09.428997 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:47:09 crc kubenswrapper[4915]: I1124 22:47:09.741566 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9l6rj" podUID="c68cddeb-8921-443e-aae0-8b4e74a785f0" containerName="registry-server" containerID="cri-o://c24d8d7c2946ef29e3bb2da28ef823dfd5d54c8d1c3210c04d67941933897b20" gracePeriod=2 Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.360005 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.478934 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68cddeb-8921-443e-aae0-8b4e74a785f0-utilities\") pod \"c68cddeb-8921-443e-aae0-8b4e74a785f0\" (UID: \"c68cddeb-8921-443e-aae0-8b4e74a785f0\") " Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.479191 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8c7z\" (UniqueName: \"kubernetes.io/projected/c68cddeb-8921-443e-aae0-8b4e74a785f0-kube-api-access-p8c7z\") pod \"c68cddeb-8921-443e-aae0-8b4e74a785f0\" (UID: \"c68cddeb-8921-443e-aae0-8b4e74a785f0\") " Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.479302 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68cddeb-8921-443e-aae0-8b4e74a785f0-catalog-content\") pod \"c68cddeb-8921-443e-aae0-8b4e74a785f0\" (UID: \"c68cddeb-8921-443e-aae0-8b4e74a785f0\") " Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.480014 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c68cddeb-8921-443e-aae0-8b4e74a785f0-utilities" (OuterVolumeSpecName: "utilities") pod "c68cddeb-8921-443e-aae0-8b4e74a785f0" (UID: "c68cddeb-8921-443e-aae0-8b4e74a785f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.500066 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c68cddeb-8921-443e-aae0-8b4e74a785f0-kube-api-access-p8c7z" (OuterVolumeSpecName: "kube-api-access-p8c7z") pod "c68cddeb-8921-443e-aae0-8b4e74a785f0" (UID: "c68cddeb-8921-443e-aae0-8b4e74a785f0"). InnerVolumeSpecName "kube-api-access-p8c7z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.587856 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c68cddeb-8921-443e-aae0-8b4e74a785f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c68cddeb-8921-443e-aae0-8b4e74a785f0" (UID: "c68cddeb-8921-443e-aae0-8b4e74a785f0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.588031 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68cddeb-8921-443e-aae0-8b4e74a785f0-catalog-content\") pod \"c68cddeb-8921-443e-aae0-8b4e74a785f0\" (UID: \"c68cddeb-8921-443e-aae0-8b4e74a785f0\") " Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.589287 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8c7z\" (UniqueName: \"kubernetes.io/projected/c68cddeb-8921-443e-aae0-8b4e74a785f0-kube-api-access-p8c7z\") on node \"crc\" DevicePath \"\"" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.589331 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c68cddeb-8921-443e-aae0-8b4e74a785f0-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:47:10 crc kubenswrapper[4915]: W1124 22:47:10.590166 4915 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/c68cddeb-8921-443e-aae0-8b4e74a785f0/volumes/kubernetes.io~empty-dir/catalog-content Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.590188 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c68cddeb-8921-443e-aae0-8b4e74a785f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c68cddeb-8921-443e-aae0-8b4e74a785f0" (UID: "c68cddeb-8921-443e-aae0-8b4e74a785f0"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.700762 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c68cddeb-8921-443e-aae0-8b4e74a785f0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.754552 4915 generic.go:334] "Generic (PLEG): container finished" podID="c68cddeb-8921-443e-aae0-8b4e74a785f0" containerID="c24d8d7c2946ef29e3bb2da28ef823dfd5d54c8d1c3210c04d67941933897b20" exitCode=0 Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.754597 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9l6rj" event={"ID":"c68cddeb-8921-443e-aae0-8b4e74a785f0","Type":"ContainerDied","Data":"c24d8d7c2946ef29e3bb2da28ef823dfd5d54c8d1c3210c04d67941933897b20"} Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.754652 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9l6rj" event={"ID":"c68cddeb-8921-443e-aae0-8b4e74a785f0","Type":"ContainerDied","Data":"840dcd55b8790919767b43615905e69aafd6d68a8b2f488741e876f3df474429"} Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.754678 4915 scope.go:117] "RemoveContainer" containerID="c24d8d7c2946ef29e3bb2da28ef823dfd5d54c8d1c3210c04d67941933897b20" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.755038 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9l6rj" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.783087 4915 scope.go:117] "RemoveContainer" containerID="19b8bc928e7d7224507c7adecc6652e911cc8e585f6b991ad9ee5dd534f82a0d" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.795454 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9l6rj"] Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.805350 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9l6rj"] Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.810463 4915 scope.go:117] "RemoveContainer" containerID="6e19df2a36562cd93b3a33cc465e4b66d45e92dc5cdf0b110c3a16c37f94f8cb" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.879012 4915 scope.go:117] "RemoveContainer" containerID="c24d8d7c2946ef29e3bb2da28ef823dfd5d54c8d1c3210c04d67941933897b20" Nov 24 22:47:10 crc kubenswrapper[4915]: E1124 22:47:10.880507 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c24d8d7c2946ef29e3bb2da28ef823dfd5d54c8d1c3210c04d67941933897b20\": container with ID starting with c24d8d7c2946ef29e3bb2da28ef823dfd5d54c8d1c3210c04d67941933897b20 not found: ID does not exist" containerID="c24d8d7c2946ef29e3bb2da28ef823dfd5d54c8d1c3210c04d67941933897b20" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.880548 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c24d8d7c2946ef29e3bb2da28ef823dfd5d54c8d1c3210c04d67941933897b20"} err="failed to get container status \"c24d8d7c2946ef29e3bb2da28ef823dfd5d54c8d1c3210c04d67941933897b20\": rpc error: code = NotFound desc = could not find container \"c24d8d7c2946ef29e3bb2da28ef823dfd5d54c8d1c3210c04d67941933897b20\": container with ID starting with c24d8d7c2946ef29e3bb2da28ef823dfd5d54c8d1c3210c04d67941933897b20 not found: ID does 
not exist" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.880598 4915 scope.go:117] "RemoveContainer" containerID="19b8bc928e7d7224507c7adecc6652e911cc8e585f6b991ad9ee5dd534f82a0d" Nov 24 22:47:10 crc kubenswrapper[4915]: E1124 22:47:10.881025 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19b8bc928e7d7224507c7adecc6652e911cc8e585f6b991ad9ee5dd534f82a0d\": container with ID starting with 19b8bc928e7d7224507c7adecc6652e911cc8e585f6b991ad9ee5dd534f82a0d not found: ID does not exist" containerID="19b8bc928e7d7224507c7adecc6652e911cc8e585f6b991ad9ee5dd534f82a0d" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.881045 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19b8bc928e7d7224507c7adecc6652e911cc8e585f6b991ad9ee5dd534f82a0d"} err="failed to get container status \"19b8bc928e7d7224507c7adecc6652e911cc8e585f6b991ad9ee5dd534f82a0d\": rpc error: code = NotFound desc = could not find container \"19b8bc928e7d7224507c7adecc6652e911cc8e585f6b991ad9ee5dd534f82a0d\": container with ID starting with 19b8bc928e7d7224507c7adecc6652e911cc8e585f6b991ad9ee5dd534f82a0d not found: ID does not exist" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.881059 4915 scope.go:117] "RemoveContainer" containerID="6e19df2a36562cd93b3a33cc465e4b66d45e92dc5cdf0b110c3a16c37f94f8cb" Nov 24 22:47:10 crc kubenswrapper[4915]: E1124 22:47:10.881762 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e19df2a36562cd93b3a33cc465e4b66d45e92dc5cdf0b110c3a16c37f94f8cb\": container with ID starting with 6e19df2a36562cd93b3a33cc465e4b66d45e92dc5cdf0b110c3a16c37f94f8cb not found: ID does not exist" containerID="6e19df2a36562cd93b3a33cc465e4b66d45e92dc5cdf0b110c3a16c37f94f8cb" Nov 24 22:47:10 crc kubenswrapper[4915]: I1124 22:47:10.881812 4915 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e19df2a36562cd93b3a33cc465e4b66d45e92dc5cdf0b110c3a16c37f94f8cb"} err="failed to get container status \"6e19df2a36562cd93b3a33cc465e4b66d45e92dc5cdf0b110c3a16c37f94f8cb\": rpc error: code = NotFound desc = could not find container \"6e19df2a36562cd93b3a33cc465e4b66d45e92dc5cdf0b110c3a16c37f94f8cb\": container with ID starting with 6e19df2a36562cd93b3a33cc465e4b66d45e92dc5cdf0b110c3a16c37f94f8cb not found: ID does not exist" Nov 24 22:47:12 crc kubenswrapper[4915]: I1124 22:47:12.441106 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c68cddeb-8921-443e-aae0-8b4e74a785f0" path="/var/lib/kubelet/pods/c68cddeb-8921-443e-aae0-8b4e74a785f0/volumes" Nov 24 22:47:21 crc kubenswrapper[4915]: I1124 22:47:21.428160 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:47:21 crc kubenswrapper[4915]: E1124 22:47:21.429443 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:47:32 crc kubenswrapper[4915]: I1124 22:47:32.443977 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:47:32 crc kubenswrapper[4915]: E1124 22:47:32.445049 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:47:43 crc kubenswrapper[4915]: I1124 22:47:43.427584 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:47:43 crc kubenswrapper[4915]: E1124 22:47:43.430359 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:47:54 crc kubenswrapper[4915]: I1124 22:47:54.427866 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:47:54 crc kubenswrapper[4915]: E1124 22:47:54.429331 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:48:05 crc kubenswrapper[4915]: I1124 22:48:05.427167 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:48:05 crc kubenswrapper[4915]: E1124 22:48:05.428089 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:48:19 crc kubenswrapper[4915]: I1124 22:48:19.427569 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:48:19 crc kubenswrapper[4915]: E1124 22:48:19.428965 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:48:32 crc kubenswrapper[4915]: I1124 22:48:32.438368 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:48:32 crc kubenswrapper[4915]: E1124 22:48:32.439832 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:48:47 crc kubenswrapper[4915]: I1124 22:48:47.427048 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:48:47 crc kubenswrapper[4915]: E1124 22:48:47.428050 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:48:58 crc kubenswrapper[4915]: I1124 22:48:58.427454 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:48:59 crc kubenswrapper[4915]: I1124 22:48:59.379971 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"f5c578688d5a272768d8c6515a2ff5a40bbd762310bac42ed15cc2ed4b212017"} Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.084839 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7dsh7"] Nov 24 22:50:07 crc kubenswrapper[4915]: E1124 22:50:07.086169 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68cddeb-8921-443e-aae0-8b4e74a785f0" containerName="extract-content" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.086187 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68cddeb-8921-443e-aae0-8b4e74a785f0" containerName="extract-content" Nov 24 22:50:07 crc kubenswrapper[4915]: E1124 22:50:07.086212 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68cddeb-8921-443e-aae0-8b4e74a785f0" containerName="extract-utilities" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.086222 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68cddeb-8921-443e-aae0-8b4e74a785f0" containerName="extract-utilities" Nov 24 22:50:07 crc kubenswrapper[4915]: E1124 22:50:07.086237 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21fc6d9f-92e2-46f5-812a-c2e6dbc66820" containerName="extract-content" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 
22:50:07.086245 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="21fc6d9f-92e2-46f5-812a-c2e6dbc66820" containerName="extract-content" Nov 24 22:50:07 crc kubenswrapper[4915]: E1124 22:50:07.086264 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21fc6d9f-92e2-46f5-812a-c2e6dbc66820" containerName="extract-utilities" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.086272 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="21fc6d9f-92e2-46f5-812a-c2e6dbc66820" containerName="extract-utilities" Nov 24 22:50:07 crc kubenswrapper[4915]: E1124 22:50:07.086312 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68cddeb-8921-443e-aae0-8b4e74a785f0" containerName="registry-server" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.086320 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68cddeb-8921-443e-aae0-8b4e74a785f0" containerName="registry-server" Nov 24 22:50:07 crc kubenswrapper[4915]: E1124 22:50:07.086337 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21fc6d9f-92e2-46f5-812a-c2e6dbc66820" containerName="registry-server" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.086344 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="21fc6d9f-92e2-46f5-812a-c2e6dbc66820" containerName="registry-server" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.086650 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="c68cddeb-8921-443e-aae0-8b4e74a785f0" containerName="registry-server" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.086675 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="21fc6d9f-92e2-46f5-812a-c2e6dbc66820" containerName="registry-server" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.089041 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.107193 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7dsh7"] Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.109054 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829d455a-da77-49a6-9cc3-acbc9e568cd7-catalog-content\") pod \"redhat-marketplace-7dsh7\" (UID: \"829d455a-da77-49a6-9cc3-acbc9e568cd7\") " pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.111426 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829d455a-da77-49a6-9cc3-acbc9e568cd7-utilities\") pod \"redhat-marketplace-7dsh7\" (UID: \"829d455a-da77-49a6-9cc3-acbc9e568cd7\") " pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.111549 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s7bp\" (UniqueName: \"kubernetes.io/projected/829d455a-da77-49a6-9cc3-acbc9e568cd7-kube-api-access-9s7bp\") pod \"redhat-marketplace-7dsh7\" (UID: \"829d455a-da77-49a6-9cc3-acbc9e568cd7\") " pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.215278 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829d455a-da77-49a6-9cc3-acbc9e568cd7-catalog-content\") pod \"redhat-marketplace-7dsh7\" (UID: \"829d455a-da77-49a6-9cc3-acbc9e568cd7\") " pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.215408 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829d455a-da77-49a6-9cc3-acbc9e568cd7-utilities\") pod \"redhat-marketplace-7dsh7\" (UID: \"829d455a-da77-49a6-9cc3-acbc9e568cd7\") " pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.215538 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s7bp\" (UniqueName: \"kubernetes.io/projected/829d455a-da77-49a6-9cc3-acbc9e568cd7-kube-api-access-9s7bp\") pod \"redhat-marketplace-7dsh7\" (UID: \"829d455a-da77-49a6-9cc3-acbc9e568cd7\") " pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.215831 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829d455a-da77-49a6-9cc3-acbc9e568cd7-utilities\") pod \"redhat-marketplace-7dsh7\" (UID: \"829d455a-da77-49a6-9cc3-acbc9e568cd7\") " pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.215878 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829d455a-da77-49a6-9cc3-acbc9e568cd7-catalog-content\") pod \"redhat-marketplace-7dsh7\" (UID: \"829d455a-da77-49a6-9cc3-acbc9e568cd7\") " pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.254378 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s7bp\" (UniqueName: \"kubernetes.io/projected/829d455a-da77-49a6-9cc3-acbc9e568cd7-kube-api-access-9s7bp\") pod \"redhat-marketplace-7dsh7\" (UID: \"829d455a-da77-49a6-9cc3-acbc9e568cd7\") " pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.417703 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:07 crc kubenswrapper[4915]: I1124 22:50:07.998411 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7dsh7"] Nov 24 22:50:08 crc kubenswrapper[4915]: I1124 22:50:08.362207 4915 generic.go:334] "Generic (PLEG): container finished" podID="829d455a-da77-49a6-9cc3-acbc9e568cd7" containerID="6d283b268517be76db320594a59fac1c9e446325059b3d5e461316181999377d" exitCode=0 Nov 24 22:50:08 crc kubenswrapper[4915]: I1124 22:50:08.362314 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7dsh7" event={"ID":"829d455a-da77-49a6-9cc3-acbc9e568cd7","Type":"ContainerDied","Data":"6d283b268517be76db320594a59fac1c9e446325059b3d5e461316181999377d"} Nov 24 22:50:08 crc kubenswrapper[4915]: I1124 22:50:08.362508 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7dsh7" event={"ID":"829d455a-da77-49a6-9cc3-acbc9e568cd7","Type":"ContainerStarted","Data":"a97f220c552e01af2ee97c1154637b9b7e7b426eba5b30f256bdbb77e6c5bfc9"} Nov 24 22:50:09 crc kubenswrapper[4915]: I1124 22:50:09.375677 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7dsh7" event={"ID":"829d455a-da77-49a6-9cc3-acbc9e568cd7","Type":"ContainerStarted","Data":"f4c642e807375abfd1ba1de98c1924cc2e081f235daeed7a79d5e781ef7ff45e"} Nov 24 22:50:10 crc kubenswrapper[4915]: I1124 22:50:10.391246 4915 generic.go:334] "Generic (PLEG): container finished" podID="829d455a-da77-49a6-9cc3-acbc9e568cd7" containerID="f4c642e807375abfd1ba1de98c1924cc2e081f235daeed7a79d5e781ef7ff45e" exitCode=0 Nov 24 22:50:10 crc kubenswrapper[4915]: I1124 22:50:10.391309 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7dsh7" 
event={"ID":"829d455a-da77-49a6-9cc3-acbc9e568cd7","Type":"ContainerDied","Data":"f4c642e807375abfd1ba1de98c1924cc2e081f235daeed7a79d5e781ef7ff45e"} Nov 24 22:50:11 crc kubenswrapper[4915]: I1124 22:50:11.408077 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7dsh7" event={"ID":"829d455a-da77-49a6-9cc3-acbc9e568cd7","Type":"ContainerStarted","Data":"6946d638af3e67bab79b17ff08a8378c7e3a9d6f28b397e0e9a6c465063377c5"} Nov 24 22:50:11 crc kubenswrapper[4915]: I1124 22:50:11.424278 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7dsh7" podStartSLOduration=1.859330114 podStartE2EDuration="4.424259971s" podCreationTimestamp="2025-11-24 22:50:07 +0000 UTC" firstStartedPulling="2025-11-24 22:50:08.364724593 +0000 UTC m=+5426.680976766" lastFinishedPulling="2025-11-24 22:50:10.92965445 +0000 UTC m=+5429.245906623" observedRunningTime="2025-11-24 22:50:11.423811029 +0000 UTC m=+5429.740063202" watchObservedRunningTime="2025-11-24 22:50:11.424259971 +0000 UTC m=+5429.740512144" Nov 24 22:50:17 crc kubenswrapper[4915]: I1124 22:50:17.418039 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:17 crc kubenswrapper[4915]: I1124 22:50:17.418471 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:17 crc kubenswrapper[4915]: I1124 22:50:17.465154 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:17 crc kubenswrapper[4915]: I1124 22:50:17.554324 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:17 crc kubenswrapper[4915]: I1124 22:50:17.709949 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-7dsh7"] Nov 24 22:50:19 crc kubenswrapper[4915]: I1124 22:50:19.547987 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7dsh7" podUID="829d455a-da77-49a6-9cc3-acbc9e568cd7" containerName="registry-server" containerID="cri-o://6946d638af3e67bab79b17ff08a8378c7e3a9d6f28b397e0e9a6c465063377c5" gracePeriod=2 Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.174991 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.193087 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829d455a-da77-49a6-9cc3-acbc9e568cd7-catalog-content\") pod \"829d455a-da77-49a6-9cc3-acbc9e568cd7\" (UID: \"829d455a-da77-49a6-9cc3-acbc9e568cd7\") " Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.193161 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829d455a-da77-49a6-9cc3-acbc9e568cd7-utilities\") pod \"829d455a-da77-49a6-9cc3-acbc9e568cd7\" (UID: \"829d455a-da77-49a6-9cc3-acbc9e568cd7\") " Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.193246 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s7bp\" (UniqueName: \"kubernetes.io/projected/829d455a-da77-49a6-9cc3-acbc9e568cd7-kube-api-access-9s7bp\") pod \"829d455a-da77-49a6-9cc3-acbc9e568cd7\" (UID: \"829d455a-da77-49a6-9cc3-acbc9e568cd7\") " Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.194209 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829d455a-da77-49a6-9cc3-acbc9e568cd7-utilities" (OuterVolumeSpecName: "utilities") pod "829d455a-da77-49a6-9cc3-acbc9e568cd7" (UID: 
"829d455a-da77-49a6-9cc3-acbc9e568cd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.205122 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/829d455a-da77-49a6-9cc3-acbc9e568cd7-kube-api-access-9s7bp" (OuterVolumeSpecName: "kube-api-access-9s7bp") pod "829d455a-da77-49a6-9cc3-acbc9e568cd7" (UID: "829d455a-da77-49a6-9cc3-acbc9e568cd7"). InnerVolumeSpecName "kube-api-access-9s7bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.226297 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/829d455a-da77-49a6-9cc3-acbc9e568cd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "829d455a-da77-49a6-9cc3-acbc9e568cd7" (UID: "829d455a-da77-49a6-9cc3-acbc9e568cd7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.296620 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/829d455a-da77-49a6-9cc3-acbc9e568cd7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.296673 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/829d455a-da77-49a6-9cc3-acbc9e568cd7-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.296688 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9s7bp\" (UniqueName: \"kubernetes.io/projected/829d455a-da77-49a6-9cc3-acbc9e568cd7-kube-api-access-9s7bp\") on node \"crc\" DevicePath \"\"" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.558440 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="829d455a-da77-49a6-9cc3-acbc9e568cd7" containerID="6946d638af3e67bab79b17ff08a8378c7e3a9d6f28b397e0e9a6c465063377c5" exitCode=0 Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.558469 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7dsh7" event={"ID":"829d455a-da77-49a6-9cc3-acbc9e568cd7","Type":"ContainerDied","Data":"6946d638af3e67bab79b17ff08a8378c7e3a9d6f28b397e0e9a6c465063377c5"} Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.558508 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7dsh7" event={"ID":"829d455a-da77-49a6-9cc3-acbc9e568cd7","Type":"ContainerDied","Data":"a97f220c552e01af2ee97c1154637b9b7e7b426eba5b30f256bdbb77e6c5bfc9"} Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.558524 4915 scope.go:117] "RemoveContainer" containerID="6946d638af3e67bab79b17ff08a8378c7e3a9d6f28b397e0e9a6c465063377c5" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.558520 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7dsh7" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.597475 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7dsh7"] Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.606495 4915 scope.go:117] "RemoveContainer" containerID="f4c642e807375abfd1ba1de98c1924cc2e081f235daeed7a79d5e781ef7ff45e" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.610600 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7dsh7"] Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.634469 4915 scope.go:117] "RemoveContainer" containerID="6d283b268517be76db320594a59fac1c9e446325059b3d5e461316181999377d" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.722223 4915 scope.go:117] "RemoveContainer" containerID="6946d638af3e67bab79b17ff08a8378c7e3a9d6f28b397e0e9a6c465063377c5" Nov 24 22:50:20 crc kubenswrapper[4915]: E1124 22:50:20.722723 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6946d638af3e67bab79b17ff08a8378c7e3a9d6f28b397e0e9a6c465063377c5\": container with ID starting with 6946d638af3e67bab79b17ff08a8378c7e3a9d6f28b397e0e9a6c465063377c5 not found: ID does not exist" containerID="6946d638af3e67bab79b17ff08a8378c7e3a9d6f28b397e0e9a6c465063377c5" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.722771 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6946d638af3e67bab79b17ff08a8378c7e3a9d6f28b397e0e9a6c465063377c5"} err="failed to get container status \"6946d638af3e67bab79b17ff08a8378c7e3a9d6f28b397e0e9a6c465063377c5\": rpc error: code = NotFound desc = could not find container \"6946d638af3e67bab79b17ff08a8378c7e3a9d6f28b397e0e9a6c465063377c5\": container with ID starting with 6946d638af3e67bab79b17ff08a8378c7e3a9d6f28b397e0e9a6c465063377c5 not found: 
ID does not exist" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.722820 4915 scope.go:117] "RemoveContainer" containerID="f4c642e807375abfd1ba1de98c1924cc2e081f235daeed7a79d5e781ef7ff45e" Nov 24 22:50:20 crc kubenswrapper[4915]: E1124 22:50:20.723178 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4c642e807375abfd1ba1de98c1924cc2e081f235daeed7a79d5e781ef7ff45e\": container with ID starting with f4c642e807375abfd1ba1de98c1924cc2e081f235daeed7a79d5e781ef7ff45e not found: ID does not exist" containerID="f4c642e807375abfd1ba1de98c1924cc2e081f235daeed7a79d5e781ef7ff45e" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.723211 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c642e807375abfd1ba1de98c1924cc2e081f235daeed7a79d5e781ef7ff45e"} err="failed to get container status \"f4c642e807375abfd1ba1de98c1924cc2e081f235daeed7a79d5e781ef7ff45e\": rpc error: code = NotFound desc = could not find container \"f4c642e807375abfd1ba1de98c1924cc2e081f235daeed7a79d5e781ef7ff45e\": container with ID starting with f4c642e807375abfd1ba1de98c1924cc2e081f235daeed7a79d5e781ef7ff45e not found: ID does not exist" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.723234 4915 scope.go:117] "RemoveContainer" containerID="6d283b268517be76db320594a59fac1c9e446325059b3d5e461316181999377d" Nov 24 22:50:20 crc kubenswrapper[4915]: E1124 22:50:20.723477 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d283b268517be76db320594a59fac1c9e446325059b3d5e461316181999377d\": container with ID starting with 6d283b268517be76db320594a59fac1c9e446325059b3d5e461316181999377d not found: ID does not exist" containerID="6d283b268517be76db320594a59fac1c9e446325059b3d5e461316181999377d" Nov 24 22:50:20 crc kubenswrapper[4915]: I1124 22:50:20.723506 4915 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d283b268517be76db320594a59fac1c9e446325059b3d5e461316181999377d"} err="failed to get container status \"6d283b268517be76db320594a59fac1c9e446325059b3d5e461316181999377d\": rpc error: code = NotFound desc = could not find container \"6d283b268517be76db320594a59fac1c9e446325059b3d5e461316181999377d\": container with ID starting with 6d283b268517be76db320594a59fac1c9e446325059b3d5e461316181999377d not found: ID does not exist" Nov 24 22:50:22 crc kubenswrapper[4915]: I1124 22:50:22.448169 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="829d455a-da77-49a6-9cc3-acbc9e568cd7" path="/var/lib/kubelet/pods/829d455a-da77-49a6-9cc3-acbc9e568cd7/volumes" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.355553 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sbsgz"] Nov 24 22:51:06 crc kubenswrapper[4915]: E1124 22:51:06.356721 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829d455a-da77-49a6-9cc3-acbc9e568cd7" containerName="extract-content" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.356739 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="829d455a-da77-49a6-9cc3-acbc9e568cd7" containerName="extract-content" Nov 24 22:51:06 crc kubenswrapper[4915]: E1124 22:51:06.356811 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829d455a-da77-49a6-9cc3-acbc9e568cd7" containerName="extract-utilities" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.356820 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="829d455a-da77-49a6-9cc3-acbc9e568cd7" containerName="extract-utilities" Nov 24 22:51:06 crc kubenswrapper[4915]: E1124 22:51:06.356853 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="829d455a-da77-49a6-9cc3-acbc9e568cd7" containerName="registry-server" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.356861 4915 
state_mem.go:107] "Deleted CPUSet assignment" podUID="829d455a-da77-49a6-9cc3-acbc9e568cd7" containerName="registry-server" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.357150 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="829d455a-da77-49a6-9cc3-acbc9e568cd7" containerName="registry-server" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.360098 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.377576 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sbsgz"] Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.430624 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca0119e-175d-4693-a0fc-4b01c687f88b-utilities\") pod \"certified-operators-sbsgz\" (UID: \"2ca0119e-175d-4693-a0fc-4b01c687f88b\") " pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.430768 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca0119e-175d-4693-a0fc-4b01c687f88b-catalog-content\") pod \"certified-operators-sbsgz\" (UID: \"2ca0119e-175d-4693-a0fc-4b01c687f88b\") " pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.431687 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-855rc\" (UniqueName: \"kubernetes.io/projected/2ca0119e-175d-4693-a0fc-4b01c687f88b-kube-api-access-855rc\") pod \"certified-operators-sbsgz\" (UID: \"2ca0119e-175d-4693-a0fc-4b01c687f88b\") " pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 
22:51:06.534599 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca0119e-175d-4693-a0fc-4b01c687f88b-utilities\") pod \"certified-operators-sbsgz\" (UID: \"2ca0119e-175d-4693-a0fc-4b01c687f88b\") " pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.534728 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca0119e-175d-4693-a0fc-4b01c687f88b-catalog-content\") pod \"certified-operators-sbsgz\" (UID: \"2ca0119e-175d-4693-a0fc-4b01c687f88b\") " pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.534954 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-855rc\" (UniqueName: \"kubernetes.io/projected/2ca0119e-175d-4693-a0fc-4b01c687f88b-kube-api-access-855rc\") pod \"certified-operators-sbsgz\" (UID: \"2ca0119e-175d-4693-a0fc-4b01c687f88b\") " pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.537188 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca0119e-175d-4693-a0fc-4b01c687f88b-utilities\") pod \"certified-operators-sbsgz\" (UID: \"2ca0119e-175d-4693-a0fc-4b01c687f88b\") " pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.538056 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca0119e-175d-4693-a0fc-4b01c687f88b-catalog-content\") pod \"certified-operators-sbsgz\" (UID: \"2ca0119e-175d-4693-a0fc-4b01c687f88b\") " pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.556514 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-855rc\" (UniqueName: \"kubernetes.io/projected/2ca0119e-175d-4693-a0fc-4b01c687f88b-kube-api-access-855rc\") pod \"certified-operators-sbsgz\" (UID: \"2ca0119e-175d-4693-a0fc-4b01c687f88b\") " pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:06 crc kubenswrapper[4915]: I1124 22:51:06.712693 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:07 crc kubenswrapper[4915]: I1124 22:51:07.292027 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sbsgz"] Nov 24 22:51:08 crc kubenswrapper[4915]: I1124 22:51:08.270711 4915 generic.go:334] "Generic (PLEG): container finished" podID="2ca0119e-175d-4693-a0fc-4b01c687f88b" containerID="672ec7589ff0eac92137d3e8ecebd06358e7beffada4d9d24886192197a05e14" exitCode=0 Nov 24 22:51:08 crc kubenswrapper[4915]: I1124 22:51:08.270847 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbsgz" event={"ID":"2ca0119e-175d-4693-a0fc-4b01c687f88b","Type":"ContainerDied","Data":"672ec7589ff0eac92137d3e8ecebd06358e7beffada4d9d24886192197a05e14"} Nov 24 22:51:08 crc kubenswrapper[4915]: I1124 22:51:08.271356 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbsgz" event={"ID":"2ca0119e-175d-4693-a0fc-4b01c687f88b","Type":"ContainerStarted","Data":"a602d59b43a660b11e85a0b0266210a7acc55a359cd5a474eee160ddb9f700d0"} Nov 24 22:51:10 crc kubenswrapper[4915]: I1124 22:51:10.298503 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbsgz" event={"ID":"2ca0119e-175d-4693-a0fc-4b01c687f88b","Type":"ContainerStarted","Data":"8bc530b10d65a69bc0f0ca1d093f3c9391f1c10aa874cd8261ef1f8f63b56a3c"} Nov 24 22:51:13 crc kubenswrapper[4915]: I1124 22:51:13.337806 4915 
generic.go:334] "Generic (PLEG): container finished" podID="2ca0119e-175d-4693-a0fc-4b01c687f88b" containerID="8bc530b10d65a69bc0f0ca1d093f3c9391f1c10aa874cd8261ef1f8f63b56a3c" exitCode=0 Nov 24 22:51:13 crc kubenswrapper[4915]: I1124 22:51:13.337933 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbsgz" event={"ID":"2ca0119e-175d-4693-a0fc-4b01c687f88b","Type":"ContainerDied","Data":"8bc530b10d65a69bc0f0ca1d093f3c9391f1c10aa874cd8261ef1f8f63b56a3c"} Nov 24 22:51:15 crc kubenswrapper[4915]: I1124 22:51:15.359407 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbsgz" event={"ID":"2ca0119e-175d-4693-a0fc-4b01c687f88b","Type":"ContainerStarted","Data":"b33e75bc7f2b06271cf65389b8941d2eb235eeb9755e56de951007e7b2c07c33"} Nov 24 22:51:15 crc kubenswrapper[4915]: I1124 22:51:15.388344 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sbsgz" podStartSLOduration=3.428719465 podStartE2EDuration="9.388325941s" podCreationTimestamp="2025-11-24 22:51:06 +0000 UTC" firstStartedPulling="2025-11-24 22:51:08.274517881 +0000 UTC m=+5486.590770084" lastFinishedPulling="2025-11-24 22:51:14.234124387 +0000 UTC m=+5492.550376560" observedRunningTime="2025-11-24 22:51:15.376893372 +0000 UTC m=+5493.693145545" watchObservedRunningTime="2025-11-24 22:51:15.388325941 +0000 UTC m=+5493.704578114" Nov 24 22:51:16 crc kubenswrapper[4915]: I1124 22:51:16.712891 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:16 crc kubenswrapper[4915]: I1124 22:51:16.713572 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:16 crc kubenswrapper[4915]: I1124 22:51:16.776660 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:24 crc kubenswrapper[4915]: I1124 22:51:24.327397 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:51:24 crc kubenswrapper[4915]: I1124 22:51:24.328996 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:51:27 crc kubenswrapper[4915]: I1124 22:51:27.109274 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:27 crc kubenswrapper[4915]: I1124 22:51:27.179362 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sbsgz"] Nov 24 22:51:27 crc kubenswrapper[4915]: I1124 22:51:27.518679 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sbsgz" podUID="2ca0119e-175d-4693-a0fc-4b01c687f88b" containerName="registry-server" containerID="cri-o://b33e75bc7f2b06271cf65389b8941d2eb235eeb9755e56de951007e7b2c07c33" gracePeriod=2 Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.113974 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.197638 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-855rc\" (UniqueName: \"kubernetes.io/projected/2ca0119e-175d-4693-a0fc-4b01c687f88b-kube-api-access-855rc\") pod \"2ca0119e-175d-4693-a0fc-4b01c687f88b\" (UID: \"2ca0119e-175d-4693-a0fc-4b01c687f88b\") " Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.197689 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca0119e-175d-4693-a0fc-4b01c687f88b-catalog-content\") pod \"2ca0119e-175d-4693-a0fc-4b01c687f88b\" (UID: \"2ca0119e-175d-4693-a0fc-4b01c687f88b\") " Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.197848 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca0119e-175d-4693-a0fc-4b01c687f88b-utilities\") pod \"2ca0119e-175d-4693-a0fc-4b01c687f88b\" (UID: \"2ca0119e-175d-4693-a0fc-4b01c687f88b\") " Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.199032 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca0119e-175d-4693-a0fc-4b01c687f88b-utilities" (OuterVolumeSpecName: "utilities") pod "2ca0119e-175d-4693-a0fc-4b01c687f88b" (UID: "2ca0119e-175d-4693-a0fc-4b01c687f88b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.208173 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca0119e-175d-4693-a0fc-4b01c687f88b-kube-api-access-855rc" (OuterVolumeSpecName: "kube-api-access-855rc") pod "2ca0119e-175d-4693-a0fc-4b01c687f88b" (UID: "2ca0119e-175d-4693-a0fc-4b01c687f88b"). InnerVolumeSpecName "kube-api-access-855rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.247503 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ca0119e-175d-4693-a0fc-4b01c687f88b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2ca0119e-175d-4693-a0fc-4b01c687f88b" (UID: "2ca0119e-175d-4693-a0fc-4b01c687f88b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.299192 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ca0119e-175d-4693-a0fc-4b01c687f88b-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.299546 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-855rc\" (UniqueName: \"kubernetes.io/projected/2ca0119e-175d-4693-a0fc-4b01c687f88b-kube-api-access-855rc\") on node \"crc\" DevicePath \"\"" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.299561 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ca0119e-175d-4693-a0fc-4b01c687f88b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.529695 4915 generic.go:334] "Generic (PLEG): container finished" podID="2ca0119e-175d-4693-a0fc-4b01c687f88b" containerID="b33e75bc7f2b06271cf65389b8941d2eb235eeb9755e56de951007e7b2c07c33" exitCode=0 Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.529729 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sbsgz" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.529739 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbsgz" event={"ID":"2ca0119e-175d-4693-a0fc-4b01c687f88b","Type":"ContainerDied","Data":"b33e75bc7f2b06271cf65389b8941d2eb235eeb9755e56de951007e7b2c07c33"} Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.529771 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sbsgz" event={"ID":"2ca0119e-175d-4693-a0fc-4b01c687f88b","Type":"ContainerDied","Data":"a602d59b43a660b11e85a0b0266210a7acc55a359cd5a474eee160ddb9f700d0"} Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.529812 4915 scope.go:117] "RemoveContainer" containerID="b33e75bc7f2b06271cf65389b8941d2eb235eeb9755e56de951007e7b2c07c33" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.573868 4915 scope.go:117] "RemoveContainer" containerID="8bc530b10d65a69bc0f0ca1d093f3c9391f1c10aa874cd8261ef1f8f63b56a3c" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.579314 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sbsgz"] Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.602872 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sbsgz"] Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.609101 4915 scope.go:117] "RemoveContainer" containerID="672ec7589ff0eac92137d3e8ecebd06358e7beffada4d9d24886192197a05e14" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.653673 4915 scope.go:117] "RemoveContainer" containerID="b33e75bc7f2b06271cf65389b8941d2eb235eeb9755e56de951007e7b2c07c33" Nov 24 22:51:28 crc kubenswrapper[4915]: E1124 22:51:28.654183 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b33e75bc7f2b06271cf65389b8941d2eb235eeb9755e56de951007e7b2c07c33\": container with ID starting with b33e75bc7f2b06271cf65389b8941d2eb235eeb9755e56de951007e7b2c07c33 not found: ID does not exist" containerID="b33e75bc7f2b06271cf65389b8941d2eb235eeb9755e56de951007e7b2c07c33" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.654227 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b33e75bc7f2b06271cf65389b8941d2eb235eeb9755e56de951007e7b2c07c33"} err="failed to get container status \"b33e75bc7f2b06271cf65389b8941d2eb235eeb9755e56de951007e7b2c07c33\": rpc error: code = NotFound desc = could not find container \"b33e75bc7f2b06271cf65389b8941d2eb235eeb9755e56de951007e7b2c07c33\": container with ID starting with b33e75bc7f2b06271cf65389b8941d2eb235eeb9755e56de951007e7b2c07c33 not found: ID does not exist" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.654254 4915 scope.go:117] "RemoveContainer" containerID="8bc530b10d65a69bc0f0ca1d093f3c9391f1c10aa874cd8261ef1f8f63b56a3c" Nov 24 22:51:28 crc kubenswrapper[4915]: E1124 22:51:28.654730 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bc530b10d65a69bc0f0ca1d093f3c9391f1c10aa874cd8261ef1f8f63b56a3c\": container with ID starting with 8bc530b10d65a69bc0f0ca1d093f3c9391f1c10aa874cd8261ef1f8f63b56a3c not found: ID does not exist" containerID="8bc530b10d65a69bc0f0ca1d093f3c9391f1c10aa874cd8261ef1f8f63b56a3c" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.654841 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bc530b10d65a69bc0f0ca1d093f3c9391f1c10aa874cd8261ef1f8f63b56a3c"} err="failed to get container status \"8bc530b10d65a69bc0f0ca1d093f3c9391f1c10aa874cd8261ef1f8f63b56a3c\": rpc error: code = NotFound desc = could not find container \"8bc530b10d65a69bc0f0ca1d093f3c9391f1c10aa874cd8261ef1f8f63b56a3c\": container with ID 
starting with 8bc530b10d65a69bc0f0ca1d093f3c9391f1c10aa874cd8261ef1f8f63b56a3c not found: ID does not exist" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.654876 4915 scope.go:117] "RemoveContainer" containerID="672ec7589ff0eac92137d3e8ecebd06358e7beffada4d9d24886192197a05e14" Nov 24 22:51:28 crc kubenswrapper[4915]: E1124 22:51:28.655130 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672ec7589ff0eac92137d3e8ecebd06358e7beffada4d9d24886192197a05e14\": container with ID starting with 672ec7589ff0eac92137d3e8ecebd06358e7beffada4d9d24886192197a05e14 not found: ID does not exist" containerID="672ec7589ff0eac92137d3e8ecebd06358e7beffada4d9d24886192197a05e14" Nov 24 22:51:28 crc kubenswrapper[4915]: I1124 22:51:28.655161 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672ec7589ff0eac92137d3e8ecebd06358e7beffada4d9d24886192197a05e14"} err="failed to get container status \"672ec7589ff0eac92137d3e8ecebd06358e7beffada4d9d24886192197a05e14\": rpc error: code = NotFound desc = could not find container \"672ec7589ff0eac92137d3e8ecebd06358e7beffada4d9d24886192197a05e14\": container with ID starting with 672ec7589ff0eac92137d3e8ecebd06358e7beffada4d9d24886192197a05e14 not found: ID does not exist" Nov 24 22:51:30 crc kubenswrapper[4915]: I1124 22:51:30.455693 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ca0119e-175d-4693-a0fc-4b01c687f88b" path="/var/lib/kubelet/pods/2ca0119e-175d-4693-a0fc-4b01c687f88b/volumes" Nov 24 22:51:38 crc kubenswrapper[4915]: I1124 22:51:38.285042 4915 trace.go:236] Trace[1671602826]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-h77kv" (24-Nov-2025 22:51:37.235) (total time: 1049ms): Nov 24 22:51:38 crc kubenswrapper[4915]: Trace[1671602826]: [1.04951916s] [1.04951916s] END Nov 24 22:51:54 crc kubenswrapper[4915]: I1124 
22:51:54.327649 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:51:54 crc kubenswrapper[4915]: I1124 22:51:54.328378 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:52:24 crc kubenswrapper[4915]: I1124 22:52:24.327395 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:52:24 crc kubenswrapper[4915]: I1124 22:52:24.329452 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:52:24 crc kubenswrapper[4915]: I1124 22:52:24.329612 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 22:52:24 crc kubenswrapper[4915]: I1124 22:52:24.331015 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5c578688d5a272768d8c6515a2ff5a40bbd762310bac42ed15cc2ed4b212017"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:52:24 crc kubenswrapper[4915]: I1124 22:52:24.331213 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://f5c578688d5a272768d8c6515a2ff5a40bbd762310bac42ed15cc2ed4b212017" gracePeriod=600 Nov 24 22:52:25 crc kubenswrapper[4915]: I1124 22:52:25.220311 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="f5c578688d5a272768d8c6515a2ff5a40bbd762310bac42ed15cc2ed4b212017" exitCode=0 Nov 24 22:52:25 crc kubenswrapper[4915]: I1124 22:52:25.220381 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"f5c578688d5a272768d8c6515a2ff5a40bbd762310bac42ed15cc2ed4b212017"} Nov 24 22:52:25 crc kubenswrapper[4915]: I1124 22:52:25.221097 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80"} Nov 24 22:52:25 crc kubenswrapper[4915]: I1124 22:52:25.221128 4915 scope.go:117] "RemoveContainer" containerID="e8f55126a8f72bcde79852645be4e6a7d3de58b23503a90df0838a9583b0b7f5" Nov 24 22:54:24 crc kubenswrapper[4915]: I1124 22:54:24.328087 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:54:24 crc kubenswrapper[4915]: I1124 
22:54:24.328662 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:54:54 crc kubenswrapper[4915]: I1124 22:54:54.327690 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:54:54 crc kubenswrapper[4915]: I1124 22:54:54.328246 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:55:24 crc kubenswrapper[4915]: I1124 22:55:24.327570 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 22:55:24 crc kubenswrapper[4915]: I1124 22:55:24.328374 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 22:55:24 crc kubenswrapper[4915]: I1124 22:55:24.328451 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 22:55:24 crc kubenswrapper[4915]: I1124 22:55:24.329884 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 22:55:24 crc kubenswrapper[4915]: I1124 22:55:24.329980 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" gracePeriod=600 Nov 24 22:55:24 crc kubenswrapper[4915]: E1124 22:55:24.547413 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:55:25 crc kubenswrapper[4915]: I1124 22:55:25.250681 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" exitCode=0 Nov 24 22:55:25 crc kubenswrapper[4915]: I1124 22:55:25.250803 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80"} Nov 24 22:55:25 crc 
kubenswrapper[4915]: I1124 22:55:25.250895 4915 scope.go:117] "RemoveContainer" containerID="f5c578688d5a272768d8c6515a2ff5a40bbd762310bac42ed15cc2ed4b212017" Nov 24 22:55:25 crc kubenswrapper[4915]: I1124 22:55:25.252497 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:55:25 crc kubenswrapper[4915]: E1124 22:55:25.253697 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:55:40 crc kubenswrapper[4915]: I1124 22:55:40.427122 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:55:40 crc kubenswrapper[4915]: E1124 22:55:40.429475 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:55:52 crc kubenswrapper[4915]: I1124 22:55:52.437677 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:55:52 crc kubenswrapper[4915]: E1124 22:55:52.438766 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:56:03 crc kubenswrapper[4915]: I1124 22:56:03.428431 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:56:03 crc kubenswrapper[4915]: E1124 22:56:03.429465 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:56:18 crc kubenswrapper[4915]: I1124 22:56:18.427438 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:56:18 crc kubenswrapper[4915]: E1124 22:56:18.428462 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:56:30 crc kubenswrapper[4915]: I1124 22:56:30.427382 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:56:30 crc kubenswrapper[4915]: E1124 22:56:30.428379 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:56:43 crc kubenswrapper[4915]: I1124 22:56:43.426705 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:56:43 crc kubenswrapper[4915]: E1124 22:56:43.427538 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:56:57 crc kubenswrapper[4915]: I1124 22:56:57.427404 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:56:57 crc kubenswrapper[4915]: E1124 22:56:57.428474 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:57:12 crc kubenswrapper[4915]: I1124 22:57:12.448364 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:57:12 crc kubenswrapper[4915]: E1124 22:57:12.449830 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:57:23 crc kubenswrapper[4915]: I1124 22:57:23.427560 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:57:23 crc kubenswrapper[4915]: E1124 22:57:23.428860 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:57:34 crc kubenswrapper[4915]: I1124 22:57:34.426798 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:57:34 crc kubenswrapper[4915]: E1124 22:57:34.430383 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:57:46 crc kubenswrapper[4915]: I1124 22:57:46.426870 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:57:46 crc kubenswrapper[4915]: E1124 22:57:46.427992 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:57:57 crc kubenswrapper[4915]: I1124 22:57:57.428927 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:57:57 crc kubenswrapper[4915]: E1124 22:57:57.430350 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.372482 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9lcx4"] Nov 24 22:58:00 crc kubenswrapper[4915]: E1124 22:58:00.374177 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca0119e-175d-4693-a0fc-4b01c687f88b" containerName="registry-server" Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.374203 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca0119e-175d-4693-a0fc-4b01c687f88b" containerName="registry-server" Nov 24 22:58:00 crc kubenswrapper[4915]: E1124 22:58:00.374249 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca0119e-175d-4693-a0fc-4b01c687f88b" containerName="extract-utilities" Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.374262 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca0119e-175d-4693-a0fc-4b01c687f88b" containerName="extract-utilities" Nov 24 22:58:00 crc kubenswrapper[4915]: E1124 22:58:00.374298 4915 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca0119e-175d-4693-a0fc-4b01c687f88b" containerName="extract-content" Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.374310 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca0119e-175d-4693-a0fc-4b01c687f88b" containerName="extract-content" Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.374774 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca0119e-175d-4693-a0fc-4b01c687f88b" containerName="registry-server" Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.377828 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.397109 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9lcx4"] Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.541150 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-utilities\") pod \"community-operators-9lcx4\" (UID: \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\") " pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.541241 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-catalog-content\") pod \"community-operators-9lcx4\" (UID: \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\") " pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.541362 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ttx8\" (UniqueName: 
\"kubernetes.io/projected/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-kube-api-access-2ttx8\") pod \"community-operators-9lcx4\" (UID: \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\") " pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.643379 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ttx8\" (UniqueName: \"kubernetes.io/projected/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-kube-api-access-2ttx8\") pod \"community-operators-9lcx4\" (UID: \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\") " pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.643532 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-utilities\") pod \"community-operators-9lcx4\" (UID: \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\") " pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.644042 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-utilities\") pod \"community-operators-9lcx4\" (UID: \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\") " pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.644340 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-catalog-content\") pod \"community-operators-9lcx4\" (UID: \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\") " pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:00 crc kubenswrapper[4915]: I1124 22:58:00.644131 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-catalog-content\") pod \"community-operators-9lcx4\" (UID: \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\") " pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:01 crc kubenswrapper[4915]: I1124 22:58:01.070355 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ttx8\" (UniqueName: \"kubernetes.io/projected/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-kube-api-access-2ttx8\") pod \"community-operators-9lcx4\" (UID: \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\") " pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:01 crc kubenswrapper[4915]: I1124 22:58:01.330532 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:01 crc kubenswrapper[4915]: I1124 22:58:01.919176 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9lcx4"] Nov 24 22:58:02 crc kubenswrapper[4915]: I1124 22:58:02.877380 4915 generic.go:334] "Generic (PLEG): container finished" podID="f87d0758-5f90-4b08-b42a-0b23ff3bcce1" containerID="fcb1c2e82d7d99eb67e871fb0e8d684b8b1b053824566c5c9dd645bc61cd3bcf" exitCode=0 Nov 24 22:58:02 crc kubenswrapper[4915]: I1124 22:58:02.877677 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lcx4" event={"ID":"f87d0758-5f90-4b08-b42a-0b23ff3bcce1","Type":"ContainerDied","Data":"fcb1c2e82d7d99eb67e871fb0e8d684b8b1b053824566c5c9dd645bc61cd3bcf"} Nov 24 22:58:02 crc kubenswrapper[4915]: I1124 22:58:02.877705 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lcx4" event={"ID":"f87d0758-5f90-4b08-b42a-0b23ff3bcce1","Type":"ContainerStarted","Data":"52b7234c4dd1fa70ed4d69332e70d8358090edcbe306c971271442dbcdba7c60"} Nov 24 22:58:02 crc kubenswrapper[4915]: I1124 22:58:02.882347 4915 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Nov 24 22:58:04 crc kubenswrapper[4915]: I1124 22:58:04.901850 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lcx4" event={"ID":"f87d0758-5f90-4b08-b42a-0b23ff3bcce1","Type":"ContainerStarted","Data":"c47e619341b1ac4c9aca28afb0ae2c55cc8af401ce0c6d0b48533f0d9d4d988e"} Nov 24 22:58:05 crc kubenswrapper[4915]: I1124 22:58:05.918972 4915 generic.go:334] "Generic (PLEG): container finished" podID="f87d0758-5f90-4b08-b42a-0b23ff3bcce1" containerID="c47e619341b1ac4c9aca28afb0ae2c55cc8af401ce0c6d0b48533f0d9d4d988e" exitCode=0 Nov 24 22:58:05 crc kubenswrapper[4915]: I1124 22:58:05.919135 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lcx4" event={"ID":"f87d0758-5f90-4b08-b42a-0b23ff3bcce1","Type":"ContainerDied","Data":"c47e619341b1ac4c9aca28afb0ae2c55cc8af401ce0c6d0b48533f0d9d4d988e"} Nov 24 22:58:06 crc kubenswrapper[4915]: I1124 22:58:06.934903 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lcx4" event={"ID":"f87d0758-5f90-4b08-b42a-0b23ff3bcce1","Type":"ContainerStarted","Data":"8556a45586c2912de237dd059c3628b8f2d2ad66fcd5c8c9ef6cdcd4de84588d"} Nov 24 22:58:06 crc kubenswrapper[4915]: I1124 22:58:06.968158 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9lcx4" podStartSLOduration=3.510974365 podStartE2EDuration="6.968127449s" podCreationTimestamp="2025-11-24 22:58:00 +0000 UTC" firstStartedPulling="2025-11-24 22:58:02.88208398 +0000 UTC m=+5901.198336153" lastFinishedPulling="2025-11-24 22:58:06.339237074 +0000 UTC m=+5904.655489237" observedRunningTime="2025-11-24 22:58:06.956241907 +0000 UTC m=+5905.272494100" watchObservedRunningTime="2025-11-24 22:58:06.968127449 +0000 UTC m=+5905.284379652" Nov 24 22:58:08 crc kubenswrapper[4915]: I1124 22:58:08.427071 4915 scope.go:117] 
"RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:58:08 crc kubenswrapper[4915]: E1124 22:58:08.428548 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:58:11 crc kubenswrapper[4915]: I1124 22:58:11.332340 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:11 crc kubenswrapper[4915]: I1124 22:58:11.333303 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:11 crc kubenswrapper[4915]: I1124 22:58:11.393023 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:12 crc kubenswrapper[4915]: I1124 22:58:12.083282 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:12 crc kubenswrapper[4915]: I1124 22:58:12.155751 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9lcx4"] Nov 24 22:58:14 crc kubenswrapper[4915]: I1124 22:58:14.018462 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9lcx4" podUID="f87d0758-5f90-4b08-b42a-0b23ff3bcce1" containerName="registry-server" containerID="cri-o://8556a45586c2912de237dd059c3628b8f2d2ad66fcd5c8c9ef6cdcd4de84588d" gracePeriod=2 Nov 24 22:58:14 crc kubenswrapper[4915]: I1124 22:58:14.595382 4915 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:14 crc kubenswrapper[4915]: I1124 22:58:14.613252 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ttx8\" (UniqueName: \"kubernetes.io/projected/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-kube-api-access-2ttx8\") pod \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\" (UID: \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\") " Nov 24 22:58:14 crc kubenswrapper[4915]: I1124 22:58:14.613400 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-catalog-content\") pod \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\" (UID: \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\") " Nov 24 22:58:14 crc kubenswrapper[4915]: I1124 22:58:14.613503 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-utilities\") pod \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\" (UID: \"f87d0758-5f90-4b08-b42a-0b23ff3bcce1\") " Nov 24 22:58:14 crc kubenswrapper[4915]: I1124 22:58:14.618977 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-utilities" (OuterVolumeSpecName: "utilities") pod "f87d0758-5f90-4b08-b42a-0b23ff3bcce1" (UID: "f87d0758-5f90-4b08-b42a-0b23ff3bcce1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:58:14 crc kubenswrapper[4915]: I1124 22:58:14.704919 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f87d0758-5f90-4b08-b42a-0b23ff3bcce1" (UID: "f87d0758-5f90-4b08-b42a-0b23ff3bcce1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:58:14 crc kubenswrapper[4915]: I1124 22:58:14.716916 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:58:14 crc kubenswrapper[4915]: I1124 22:58:14.716945 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.040676 4915 generic.go:334] "Generic (PLEG): container finished" podID="f87d0758-5f90-4b08-b42a-0b23ff3bcce1" containerID="8556a45586c2912de237dd059c3628b8f2d2ad66fcd5c8c9ef6cdcd4de84588d" exitCode=0 Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.040723 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lcx4" event={"ID":"f87d0758-5f90-4b08-b42a-0b23ff3bcce1","Type":"ContainerDied","Data":"8556a45586c2912de237dd059c3628b8f2d2ad66fcd5c8c9ef6cdcd4de84588d"} Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.040757 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9lcx4" event={"ID":"f87d0758-5f90-4b08-b42a-0b23ff3bcce1","Type":"ContainerDied","Data":"52b7234c4dd1fa70ed4d69332e70d8358090edcbe306c971271442dbcdba7c60"} Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.040796 4915 scope.go:117] "RemoveContainer" containerID="8556a45586c2912de237dd059c3628b8f2d2ad66fcd5c8c9ef6cdcd4de84588d" Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.040918 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9lcx4" Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.062678 4915 scope.go:117] "RemoveContainer" containerID="c47e619341b1ac4c9aca28afb0ae2c55cc8af401ce0c6d0b48533f0d9d4d988e" Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.264122 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-kube-api-access-2ttx8" (OuterVolumeSpecName: "kube-api-access-2ttx8") pod "f87d0758-5f90-4b08-b42a-0b23ff3bcce1" (UID: "f87d0758-5f90-4b08-b42a-0b23ff3bcce1"). InnerVolumeSpecName "kube-api-access-2ttx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.280171 4915 scope.go:117] "RemoveContainer" containerID="fcb1c2e82d7d99eb67e871fb0e8d684b8b1b053824566c5c9dd645bc61cd3bcf" Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.329373 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ttx8\" (UniqueName: \"kubernetes.io/projected/f87d0758-5f90-4b08-b42a-0b23ff3bcce1-kube-api-access-2ttx8\") on node \"crc\" DevicePath \"\"" Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.416493 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9lcx4"] Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.419860 4915 scope.go:117] "RemoveContainer" containerID="8556a45586c2912de237dd059c3628b8f2d2ad66fcd5c8c9ef6cdcd4de84588d" Nov 24 22:58:15 crc kubenswrapper[4915]: E1124 22:58:15.422349 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8556a45586c2912de237dd059c3628b8f2d2ad66fcd5c8c9ef6cdcd4de84588d\": container with ID starting with 8556a45586c2912de237dd059c3628b8f2d2ad66fcd5c8c9ef6cdcd4de84588d not found: ID does not exist" containerID="8556a45586c2912de237dd059c3628b8f2d2ad66fcd5c8c9ef6cdcd4de84588d" 
Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.422432 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8556a45586c2912de237dd059c3628b8f2d2ad66fcd5c8c9ef6cdcd4de84588d"} err="failed to get container status \"8556a45586c2912de237dd059c3628b8f2d2ad66fcd5c8c9ef6cdcd4de84588d\": rpc error: code = NotFound desc = could not find container \"8556a45586c2912de237dd059c3628b8f2d2ad66fcd5c8c9ef6cdcd4de84588d\": container with ID starting with 8556a45586c2912de237dd059c3628b8f2d2ad66fcd5c8c9ef6cdcd4de84588d not found: ID does not exist" Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.422474 4915 scope.go:117] "RemoveContainer" containerID="c47e619341b1ac4c9aca28afb0ae2c55cc8af401ce0c6d0b48533f0d9d4d988e" Nov 24 22:58:15 crc kubenswrapper[4915]: E1124 22:58:15.423006 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47e619341b1ac4c9aca28afb0ae2c55cc8af401ce0c6d0b48533f0d9d4d988e\": container with ID starting with c47e619341b1ac4c9aca28afb0ae2c55cc8af401ce0c6d0b48533f0d9d4d988e not found: ID does not exist" containerID="c47e619341b1ac4c9aca28afb0ae2c55cc8af401ce0c6d0b48533f0d9d4d988e" Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.423051 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47e619341b1ac4c9aca28afb0ae2c55cc8af401ce0c6d0b48533f0d9d4d988e"} err="failed to get container status \"c47e619341b1ac4c9aca28afb0ae2c55cc8af401ce0c6d0b48533f0d9d4d988e\": rpc error: code = NotFound desc = could not find container \"c47e619341b1ac4c9aca28afb0ae2c55cc8af401ce0c6d0b48533f0d9d4d988e\": container with ID starting with c47e619341b1ac4c9aca28afb0ae2c55cc8af401ce0c6d0b48533f0d9d4d988e not found: ID does not exist" Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.423078 4915 scope.go:117] "RemoveContainer" 
containerID="fcb1c2e82d7d99eb67e871fb0e8d684b8b1b053824566c5c9dd645bc61cd3bcf" Nov 24 22:58:15 crc kubenswrapper[4915]: E1124 22:58:15.423484 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcb1c2e82d7d99eb67e871fb0e8d684b8b1b053824566c5c9dd645bc61cd3bcf\": container with ID starting with fcb1c2e82d7d99eb67e871fb0e8d684b8b1b053824566c5c9dd645bc61cd3bcf not found: ID does not exist" containerID="fcb1c2e82d7d99eb67e871fb0e8d684b8b1b053824566c5c9dd645bc61cd3bcf" Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.423517 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcb1c2e82d7d99eb67e871fb0e8d684b8b1b053824566c5c9dd645bc61cd3bcf"} err="failed to get container status \"fcb1c2e82d7d99eb67e871fb0e8d684b8b1b053824566c5c9dd645bc61cd3bcf\": rpc error: code = NotFound desc = could not find container \"fcb1c2e82d7d99eb67e871fb0e8d684b8b1b053824566c5c9dd645bc61cd3bcf\": container with ID starting with fcb1c2e82d7d99eb67e871fb0e8d684b8b1b053824566c5c9dd645bc61cd3bcf not found: ID does not exist" Nov 24 22:58:15 crc kubenswrapper[4915]: I1124 22:58:15.429836 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9lcx4"] Nov 24 22:58:16 crc kubenswrapper[4915]: I1124 22:58:16.440183 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f87d0758-5f90-4b08-b42a-0b23ff3bcce1" path="/var/lib/kubelet/pods/f87d0758-5f90-4b08-b42a-0b23ff3bcce1/volumes" Nov 24 22:58:21 crc kubenswrapper[4915]: I1124 22:58:21.426972 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:58:21 crc kubenswrapper[4915]: E1124 22:58:21.427842 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.052040 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jtmnn"] Nov 24 22:58:29 crc kubenswrapper[4915]: E1124 22:58:29.053282 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87d0758-5f90-4b08-b42a-0b23ff3bcce1" containerName="extract-content" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.053311 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87d0758-5f90-4b08-b42a-0b23ff3bcce1" containerName="extract-content" Nov 24 22:58:29 crc kubenswrapper[4915]: E1124 22:58:29.053338 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87d0758-5f90-4b08-b42a-0b23ff3bcce1" containerName="extract-utilities" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.053347 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87d0758-5f90-4b08-b42a-0b23ff3bcce1" containerName="extract-utilities" Nov 24 22:58:29 crc kubenswrapper[4915]: E1124 22:58:29.053389 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f87d0758-5f90-4b08-b42a-0b23ff3bcce1" containerName="registry-server" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.053397 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f87d0758-5f90-4b08-b42a-0b23ff3bcce1" containerName="registry-server" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.053676 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f87d0758-5f90-4b08-b42a-0b23ff3bcce1" containerName="registry-server" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.055587 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.065350 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jtmnn"] Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.217872 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-catalog-content\") pod \"redhat-operators-jtmnn\" (UID: \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\") " pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.218642 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6rkr\" (UniqueName: \"kubernetes.io/projected/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-kube-api-access-z6rkr\") pod \"redhat-operators-jtmnn\" (UID: \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\") " pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.218728 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-utilities\") pod \"redhat-operators-jtmnn\" (UID: \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\") " pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.321447 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6rkr\" (UniqueName: \"kubernetes.io/projected/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-kube-api-access-z6rkr\") pod \"redhat-operators-jtmnn\" (UID: \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\") " pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.321542 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-utilities\") pod \"redhat-operators-jtmnn\" (UID: \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\") " pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.321628 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-catalog-content\") pod \"redhat-operators-jtmnn\" (UID: \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\") " pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.322475 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-catalog-content\") pod \"redhat-operators-jtmnn\" (UID: \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\") " pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.322769 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-utilities\") pod \"redhat-operators-jtmnn\" (UID: \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\") " pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.347304 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6rkr\" (UniqueName: \"kubernetes.io/projected/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-kube-api-access-z6rkr\") pod \"redhat-operators-jtmnn\" (UID: \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\") " pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.408199 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:29 crc kubenswrapper[4915]: I1124 22:58:29.940614 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jtmnn"] Nov 24 22:58:30 crc kubenswrapper[4915]: I1124 22:58:30.239573 4915 generic.go:334] "Generic (PLEG): container finished" podID="ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" containerID="7475230c28f98b4b2882c7691e2050e3a11d619e1f7e261222ee251cbef27b67" exitCode=0 Nov 24 22:58:30 crc kubenswrapper[4915]: I1124 22:58:30.239766 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtmnn" event={"ID":"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d","Type":"ContainerDied","Data":"7475230c28f98b4b2882c7691e2050e3a11d619e1f7e261222ee251cbef27b67"} Nov 24 22:58:30 crc kubenswrapper[4915]: I1124 22:58:30.239908 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtmnn" event={"ID":"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d","Type":"ContainerStarted","Data":"013549abe7ce8dfe2e59508cf136d0d449f6d89681747c3eff2461b57183dd12"} Nov 24 22:58:31 crc kubenswrapper[4915]: I1124 22:58:31.255562 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtmnn" event={"ID":"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d","Type":"ContainerStarted","Data":"5b81cc8cba1e14c8a0f102fa188e526184efa4bb33b3ba1d2d8e0fe17f377920"} Nov 24 22:58:34 crc kubenswrapper[4915]: I1124 22:58:34.431841 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:58:34 crc kubenswrapper[4915]: E1124 22:58:34.432604 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:58:35 crc kubenswrapper[4915]: I1124 22:58:35.305251 4915 generic.go:334] "Generic (PLEG): container finished" podID="ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" containerID="5b81cc8cba1e14c8a0f102fa188e526184efa4bb33b3ba1d2d8e0fe17f377920" exitCode=0 Nov 24 22:58:35 crc kubenswrapper[4915]: I1124 22:58:35.305343 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtmnn" event={"ID":"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d","Type":"ContainerDied","Data":"5b81cc8cba1e14c8a0f102fa188e526184efa4bb33b3ba1d2d8e0fe17f377920"} Nov 24 22:58:36 crc kubenswrapper[4915]: I1124 22:58:36.319625 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtmnn" event={"ID":"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d","Type":"ContainerStarted","Data":"d2c32ec2c5bc3809f56c763e78b10eb6fb7ddd7d5a2eb5f1d8d4e91eaf78270c"} Nov 24 22:58:36 crc kubenswrapper[4915]: I1124 22:58:36.369685 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jtmnn" podStartSLOduration=1.902591947 podStartE2EDuration="7.369655278s" podCreationTimestamp="2025-11-24 22:58:29 +0000 UTC" firstStartedPulling="2025-11-24 22:58:30.241504575 +0000 UTC m=+5928.557756748" lastFinishedPulling="2025-11-24 22:58:35.708567906 +0000 UTC m=+5934.024820079" observedRunningTime="2025-11-24 22:58:36.347203233 +0000 UTC m=+5934.663455446" watchObservedRunningTime="2025-11-24 22:58:36.369655278 +0000 UTC m=+5934.685907471" Nov 24 22:58:39 crc kubenswrapper[4915]: I1124 22:58:39.409093 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:39 crc kubenswrapper[4915]: I1124 
22:58:39.409684 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:40 crc kubenswrapper[4915]: I1124 22:58:40.466966 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jtmnn" podUID="ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" containerName="registry-server" probeResult="failure" output=< Nov 24 22:58:40 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 22:58:40 crc kubenswrapper[4915]: > Nov 24 22:58:48 crc kubenswrapper[4915]: I1124 22:58:48.426996 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:58:48 crc kubenswrapper[4915]: E1124 22:58:48.427987 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:58:49 crc kubenswrapper[4915]: I1124 22:58:49.498324 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:49 crc kubenswrapper[4915]: I1124 22:58:49.574673 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:49 crc kubenswrapper[4915]: I1124 22:58:49.895961 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jtmnn"] Nov 24 22:58:51 crc kubenswrapper[4915]: I1124 22:58:51.496831 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jtmnn" 
podUID="ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" containerName="registry-server" containerID="cri-o://d2c32ec2c5bc3809f56c763e78b10eb6fb7ddd7d5a2eb5f1d8d4e91eaf78270c" gracePeriod=2 Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.051111 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.123346 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-utilities\") pod \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\" (UID: \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\") " Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.123552 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6rkr\" (UniqueName: \"kubernetes.io/projected/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-kube-api-access-z6rkr\") pod \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\" (UID: \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\") " Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.123826 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-catalog-content\") pod \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\" (UID: \"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d\") " Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.124203 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-utilities" (OuterVolumeSpecName: "utilities") pod "ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" (UID: "ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.124591 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.129820 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-kube-api-access-z6rkr" (OuterVolumeSpecName: "kube-api-access-z6rkr") pod "ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" (UID: "ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d"). InnerVolumeSpecName "kube-api-access-z6rkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.226441 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6rkr\" (UniqueName: \"kubernetes.io/projected/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-kube-api-access-z6rkr\") on node \"crc\" DevicePath \"\"" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.251430 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" (UID: "ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.328878 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.512199 4915 generic.go:334] "Generic (PLEG): container finished" podID="ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" containerID="d2c32ec2c5bc3809f56c763e78b10eb6fb7ddd7d5a2eb5f1d8d4e91eaf78270c" exitCode=0 Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.512246 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtmnn" event={"ID":"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d","Type":"ContainerDied","Data":"d2c32ec2c5bc3809f56c763e78b10eb6fb7ddd7d5a2eb5f1d8d4e91eaf78270c"} Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.512284 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtmnn" event={"ID":"ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d","Type":"ContainerDied","Data":"013549abe7ce8dfe2e59508cf136d0d449f6d89681747c3eff2461b57183dd12"} Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.512307 4915 scope.go:117] "RemoveContainer" containerID="d2c32ec2c5bc3809f56c763e78b10eb6fb7ddd7d5a2eb5f1d8d4e91eaf78270c" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.512345 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jtmnn" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.566118 4915 scope.go:117] "RemoveContainer" containerID="5b81cc8cba1e14c8a0f102fa188e526184efa4bb33b3ba1d2d8e0fe17f377920" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.571997 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jtmnn"] Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.590782 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jtmnn"] Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.609689 4915 scope.go:117] "RemoveContainer" containerID="7475230c28f98b4b2882c7691e2050e3a11d619e1f7e261222ee251cbef27b67" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.688694 4915 scope.go:117] "RemoveContainer" containerID="d2c32ec2c5bc3809f56c763e78b10eb6fb7ddd7d5a2eb5f1d8d4e91eaf78270c" Nov 24 22:58:52 crc kubenswrapper[4915]: E1124 22:58:52.689278 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2c32ec2c5bc3809f56c763e78b10eb6fb7ddd7d5a2eb5f1d8d4e91eaf78270c\": container with ID starting with d2c32ec2c5bc3809f56c763e78b10eb6fb7ddd7d5a2eb5f1d8d4e91eaf78270c not found: ID does not exist" containerID="d2c32ec2c5bc3809f56c763e78b10eb6fb7ddd7d5a2eb5f1d8d4e91eaf78270c" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.689340 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2c32ec2c5bc3809f56c763e78b10eb6fb7ddd7d5a2eb5f1d8d4e91eaf78270c"} err="failed to get container status \"d2c32ec2c5bc3809f56c763e78b10eb6fb7ddd7d5a2eb5f1d8d4e91eaf78270c\": rpc error: code = NotFound desc = could not find container \"d2c32ec2c5bc3809f56c763e78b10eb6fb7ddd7d5a2eb5f1d8d4e91eaf78270c\": container with ID starting with d2c32ec2c5bc3809f56c763e78b10eb6fb7ddd7d5a2eb5f1d8d4e91eaf78270c not found: ID does 
not exist" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.689370 4915 scope.go:117] "RemoveContainer" containerID="5b81cc8cba1e14c8a0f102fa188e526184efa4bb33b3ba1d2d8e0fe17f377920" Nov 24 22:58:52 crc kubenswrapper[4915]: E1124 22:58:52.689721 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b81cc8cba1e14c8a0f102fa188e526184efa4bb33b3ba1d2d8e0fe17f377920\": container with ID starting with 5b81cc8cba1e14c8a0f102fa188e526184efa4bb33b3ba1d2d8e0fe17f377920 not found: ID does not exist" containerID="5b81cc8cba1e14c8a0f102fa188e526184efa4bb33b3ba1d2d8e0fe17f377920" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.689754 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b81cc8cba1e14c8a0f102fa188e526184efa4bb33b3ba1d2d8e0fe17f377920"} err="failed to get container status \"5b81cc8cba1e14c8a0f102fa188e526184efa4bb33b3ba1d2d8e0fe17f377920\": rpc error: code = NotFound desc = could not find container \"5b81cc8cba1e14c8a0f102fa188e526184efa4bb33b3ba1d2d8e0fe17f377920\": container with ID starting with 5b81cc8cba1e14c8a0f102fa188e526184efa4bb33b3ba1d2d8e0fe17f377920 not found: ID does not exist" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.689862 4915 scope.go:117] "RemoveContainer" containerID="7475230c28f98b4b2882c7691e2050e3a11d619e1f7e261222ee251cbef27b67" Nov 24 22:58:52 crc kubenswrapper[4915]: E1124 22:58:52.690123 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7475230c28f98b4b2882c7691e2050e3a11d619e1f7e261222ee251cbef27b67\": container with ID starting with 7475230c28f98b4b2882c7691e2050e3a11d619e1f7e261222ee251cbef27b67 not found: ID does not exist" containerID="7475230c28f98b4b2882c7691e2050e3a11d619e1f7e261222ee251cbef27b67" Nov 24 22:58:52 crc kubenswrapper[4915]: I1124 22:58:52.690173 4915 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7475230c28f98b4b2882c7691e2050e3a11d619e1f7e261222ee251cbef27b67"} err="failed to get container status \"7475230c28f98b4b2882c7691e2050e3a11d619e1f7e261222ee251cbef27b67\": rpc error: code = NotFound desc = could not find container \"7475230c28f98b4b2882c7691e2050e3a11d619e1f7e261222ee251cbef27b67\": container with ID starting with 7475230c28f98b4b2882c7691e2050e3a11d619e1f7e261222ee251cbef27b67 not found: ID does not exist" Nov 24 22:58:54 crc kubenswrapper[4915]: I1124 22:58:54.440307 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" path="/var/lib/kubelet/pods/ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d/volumes" Nov 24 22:59:03 crc kubenswrapper[4915]: I1124 22:59:03.426498 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:59:03 crc kubenswrapper[4915]: E1124 22:59:03.427517 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:59:18 crc kubenswrapper[4915]: I1124 22:59:18.427362 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:59:18 crc kubenswrapper[4915]: E1124 22:59:18.429101 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:59:31 crc kubenswrapper[4915]: I1124 22:59:31.426497 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:59:31 crc kubenswrapper[4915]: E1124 22:59:31.427552 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 22:59:46 crc kubenswrapper[4915]: I1124 22:59:46.428054 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 22:59:46 crc kubenswrapper[4915]: E1124 22:59:46.429077 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.227845 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv"] Nov 24 23:00:00 crc kubenswrapper[4915]: E1124 23:00:00.228950 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" containerName="extract-utilities" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.228969 4915 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" containerName="extract-utilities" Nov 24 23:00:00 crc kubenswrapper[4915]: E1124 23:00:00.228990 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" containerName="extract-content" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.228999 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" containerName="extract-content" Nov 24 23:00:00 crc kubenswrapper[4915]: E1124 23:00:00.229022 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" containerName="registry-server" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.229032 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" containerName="registry-server" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.229318 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea6973f0-b1d7-4ef4-887f-cc1b5e5b3e8d" containerName="registry-server" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.230330 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.233023 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.235513 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.247608 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv"] Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.268256 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b89h\" (UniqueName: \"kubernetes.io/projected/792570a9-81fd-4665-a220-06528f954f96-kube-api-access-5b89h\") pod \"collect-profiles-29400420-ddrvv\" (UID: \"792570a9-81fd-4665-a220-06528f954f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.268392 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/792570a9-81fd-4665-a220-06528f954f96-secret-volume\") pod \"collect-profiles-29400420-ddrvv\" (UID: \"792570a9-81fd-4665-a220-06528f954f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.268614 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/792570a9-81fd-4665-a220-06528f954f96-config-volume\") pod \"collect-profiles-29400420-ddrvv\" (UID: \"792570a9-81fd-4665-a220-06528f954f96\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.370765 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b89h\" (UniqueName: \"kubernetes.io/projected/792570a9-81fd-4665-a220-06528f954f96-kube-api-access-5b89h\") pod \"collect-profiles-29400420-ddrvv\" (UID: \"792570a9-81fd-4665-a220-06528f954f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.370928 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/792570a9-81fd-4665-a220-06528f954f96-secret-volume\") pod \"collect-profiles-29400420-ddrvv\" (UID: \"792570a9-81fd-4665-a220-06528f954f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.371060 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/792570a9-81fd-4665-a220-06528f954f96-config-volume\") pod \"collect-profiles-29400420-ddrvv\" (UID: \"792570a9-81fd-4665-a220-06528f954f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.371938 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/792570a9-81fd-4665-a220-06528f954f96-config-volume\") pod \"collect-profiles-29400420-ddrvv\" (UID: \"792570a9-81fd-4665-a220-06528f954f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.380604 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/792570a9-81fd-4665-a220-06528f954f96-secret-volume\") pod \"collect-profiles-29400420-ddrvv\" (UID: \"792570a9-81fd-4665-a220-06528f954f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.386883 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b89h\" (UniqueName: \"kubernetes.io/projected/792570a9-81fd-4665-a220-06528f954f96-kube-api-access-5b89h\") pod \"collect-profiles-29400420-ddrvv\" (UID: \"792570a9-81fd-4665-a220-06528f954f96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" Nov 24 23:00:00 crc kubenswrapper[4915]: I1124 23:00:00.584804 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" Nov 24 23:00:01 crc kubenswrapper[4915]: I1124 23:00:01.427948 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 23:00:01 crc kubenswrapper[4915]: E1124 23:00:01.428863 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:00:01 crc kubenswrapper[4915]: I1124 23:00:01.776154 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv"] Nov 24 23:00:01 crc kubenswrapper[4915]: W1124 23:00:01.788489 4915 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod792570a9_81fd_4665_a220_06528f954f96.slice/crio-5025146df233184415579f1cad93d396613fb92570ac0205700955c32e73d759 WatchSource:0}: Error finding container 5025146df233184415579f1cad93d396613fb92570ac0205700955c32e73d759: Status 404 returned error can't find the container with id 5025146df233184415579f1cad93d396613fb92570ac0205700955c32e73d759 Nov 24 23:00:02 crc kubenswrapper[4915]: I1124 23:00:02.524278 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" event={"ID":"792570a9-81fd-4665-a220-06528f954f96","Type":"ContainerStarted","Data":"d34560401a6aadaa90cc9527ab2ec33245b16ff8f721ecb26b7cfd455944bc4b"} Nov 24 23:00:02 crc kubenswrapper[4915]: I1124 23:00:02.524746 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" event={"ID":"792570a9-81fd-4665-a220-06528f954f96","Type":"ContainerStarted","Data":"5025146df233184415579f1cad93d396613fb92570ac0205700955c32e73d759"} Nov 24 23:00:02 crc kubenswrapper[4915]: I1124 23:00:02.542393 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" podStartSLOduration=2.542376432 podStartE2EDuration="2.542376432s" podCreationTimestamp="2025-11-24 23:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 23:00:02.538597939 +0000 UTC m=+6020.854850112" watchObservedRunningTime="2025-11-24 23:00:02.542376432 +0000 UTC m=+6020.858628605" Nov 24 23:00:03 crc kubenswrapper[4915]: I1124 23:00:03.538686 4915 generic.go:334] "Generic (PLEG): container finished" podID="792570a9-81fd-4665-a220-06528f954f96" containerID="d34560401a6aadaa90cc9527ab2ec33245b16ff8f721ecb26b7cfd455944bc4b" exitCode=0 Nov 24 23:00:03 crc kubenswrapper[4915]: 
I1124 23:00:03.538945 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" event={"ID":"792570a9-81fd-4665-a220-06528f954f96","Type":"ContainerDied","Data":"d34560401a6aadaa90cc9527ab2ec33245b16ff8f721ecb26b7cfd455944bc4b"} Nov 24 23:00:05 crc kubenswrapper[4915]: I1124 23:00:05.024170 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" Nov 24 23:00:05 crc kubenswrapper[4915]: I1124 23:00:05.081469 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/792570a9-81fd-4665-a220-06528f954f96-secret-volume\") pod \"792570a9-81fd-4665-a220-06528f954f96\" (UID: \"792570a9-81fd-4665-a220-06528f954f96\") " Nov 24 23:00:05 crc kubenswrapper[4915]: I1124 23:00:05.081599 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/792570a9-81fd-4665-a220-06528f954f96-config-volume\") pod \"792570a9-81fd-4665-a220-06528f954f96\" (UID: \"792570a9-81fd-4665-a220-06528f954f96\") " Nov 24 23:00:05 crc kubenswrapper[4915]: I1124 23:00:05.081666 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b89h\" (UniqueName: \"kubernetes.io/projected/792570a9-81fd-4665-a220-06528f954f96-kube-api-access-5b89h\") pod \"792570a9-81fd-4665-a220-06528f954f96\" (UID: \"792570a9-81fd-4665-a220-06528f954f96\") " Nov 24 23:00:05 crc kubenswrapper[4915]: I1124 23:00:05.083879 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/792570a9-81fd-4665-a220-06528f954f96-config-volume" (OuterVolumeSpecName: "config-volume") pod "792570a9-81fd-4665-a220-06528f954f96" (UID: "792570a9-81fd-4665-a220-06528f954f96"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 23:00:05 crc kubenswrapper[4915]: I1124 23:00:05.089477 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792570a9-81fd-4665-a220-06528f954f96-kube-api-access-5b89h" (OuterVolumeSpecName: "kube-api-access-5b89h") pod "792570a9-81fd-4665-a220-06528f954f96" (UID: "792570a9-81fd-4665-a220-06528f954f96"). InnerVolumeSpecName "kube-api-access-5b89h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:00:05 crc kubenswrapper[4915]: I1124 23:00:05.092011 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/792570a9-81fd-4665-a220-06528f954f96-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "792570a9-81fd-4665-a220-06528f954f96" (UID: "792570a9-81fd-4665-a220-06528f954f96"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:00:05 crc kubenswrapper[4915]: I1124 23:00:05.186310 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/792570a9-81fd-4665-a220-06528f954f96-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 23:00:05 crc kubenswrapper[4915]: I1124 23:00:05.186358 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/792570a9-81fd-4665-a220-06528f954f96-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 23:00:05 crc kubenswrapper[4915]: I1124 23:00:05.186378 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5b89h\" (UniqueName: \"kubernetes.io/projected/792570a9-81fd-4665-a220-06528f954f96-kube-api-access-5b89h\") on node \"crc\" DevicePath \"\"" Nov 24 23:00:05 crc kubenswrapper[4915]: I1124 23:00:05.568894 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" 
event={"ID":"792570a9-81fd-4665-a220-06528f954f96","Type":"ContainerDied","Data":"5025146df233184415579f1cad93d396613fb92570ac0205700955c32e73d759"} Nov 24 23:00:05 crc kubenswrapper[4915]: I1124 23:00:05.569256 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5025146df233184415579f1cad93d396613fb92570ac0205700955c32e73d759" Nov 24 23:00:05 crc kubenswrapper[4915]: I1124 23:00:05.568986 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400420-ddrvv" Nov 24 23:00:06 crc kubenswrapper[4915]: I1124 23:00:06.125225 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4"] Nov 24 23:00:06 crc kubenswrapper[4915]: I1124 23:00:06.138411 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400375-c2dz4"] Nov 24 23:00:06 crc kubenswrapper[4915]: I1124 23:00:06.451768 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2289122f-8531-401d-8af4-df2ee60099c0" path="/var/lib/kubelet/pods/2289122f-8531-401d-8af4-df2ee60099c0/volumes" Nov 24 23:00:16 crc kubenswrapper[4915]: I1124 23:00:16.428195 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 23:00:16 crc kubenswrapper[4915]: E1124 23:00:16.429214 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:00:19 crc kubenswrapper[4915]: I1124 23:00:19.759324 4915 scope.go:117] "RemoveContainer" 
containerID="3f525ece29c3053b46ab2e937a7047c2cbcc13771402e73a276697815219cdec" Nov 24 23:00:27 crc kubenswrapper[4915]: I1124 23:00:27.428126 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 23:00:27 crc kubenswrapper[4915]: I1124 23:00:27.880224 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"952a49a7a6a5e5a33f1cb4cff4cc7ececaaecfd6b8b168edbd55891db41d2c90"} Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.204467 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29400421-8wvcv"] Nov 24 23:01:00 crc kubenswrapper[4915]: E1124 23:01:00.205475 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792570a9-81fd-4665-a220-06528f954f96" containerName="collect-profiles" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.205492 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="792570a9-81fd-4665-a220-06528f954f96" containerName="collect-profiles" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.205760 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="792570a9-81fd-4665-a220-06528f954f96" containerName="collect-profiles" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.210454 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.229450 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400421-8wvcv"] Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.270671 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-config-data\") pod \"keystone-cron-29400421-8wvcv\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.270951 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-combined-ca-bundle\") pod \"keystone-cron-29400421-8wvcv\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.271061 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmxqs\" (UniqueName: \"kubernetes.io/projected/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-kube-api-access-bmxqs\") pod \"keystone-cron-29400421-8wvcv\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.271152 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-fernet-keys\") pod \"keystone-cron-29400421-8wvcv\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.373252 4915 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-bmxqs\" (UniqueName: \"kubernetes.io/projected/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-kube-api-access-bmxqs\") pod \"keystone-cron-29400421-8wvcv\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.373300 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-fernet-keys\") pod \"keystone-cron-29400421-8wvcv\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.373456 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-config-data\") pod \"keystone-cron-29400421-8wvcv\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.373507 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-combined-ca-bundle\") pod \"keystone-cron-29400421-8wvcv\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.387744 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-fernet-keys\") pod \"keystone-cron-29400421-8wvcv\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.387829 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-config-data\") pod \"keystone-cron-29400421-8wvcv\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.392289 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmxqs\" (UniqueName: \"kubernetes.io/projected/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-kube-api-access-bmxqs\") pod \"keystone-cron-29400421-8wvcv\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.468604 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-combined-ca-bundle\") pod \"keystone-cron-29400421-8wvcv\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:00 crc kubenswrapper[4915]: I1124 23:01:00.573187 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:01 crc kubenswrapper[4915]: I1124 23:01:01.033410 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29400421-8wvcv"] Nov 24 23:01:01 crc kubenswrapper[4915]: I1124 23:01:01.312002 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400421-8wvcv" event={"ID":"621c0c9b-73f8-49ca-90a1-a7f3dc80382b","Type":"ContainerStarted","Data":"22923f48aa95cc09dca455970fadf0ef15a5344127b3a1638c6e69957434e054"} Nov 24 23:01:02 crc kubenswrapper[4915]: I1124 23:01:02.324646 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400421-8wvcv" event={"ID":"621c0c9b-73f8-49ca-90a1-a7f3dc80382b","Type":"ContainerStarted","Data":"4604234fed634c85d4c1beeb34e43c95605f71b5d350b929b10bd86317d0dde5"} Nov 24 23:01:02 crc kubenswrapper[4915]: I1124 23:01:02.352621 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29400421-8wvcv" podStartSLOduration=2.352605102 podStartE2EDuration="2.352605102s" podCreationTimestamp="2025-11-24 23:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 23:01:02.343178518 +0000 UTC m=+6080.659430691" watchObservedRunningTime="2025-11-24 23:01:02.352605102 +0000 UTC m=+6080.668857275" Nov 24 23:01:03 crc kubenswrapper[4915]: I1124 23:01:03.481849 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b9qw9"] Nov 24 23:01:03 crc kubenswrapper[4915]: I1124 23:01:03.489389 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:03 crc kubenswrapper[4915]: I1124 23:01:03.499532 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9qw9"] Nov 24 23:01:03 crc kubenswrapper[4915]: I1124 23:01:03.553416 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-utilities\") pod \"redhat-marketplace-b9qw9\" (UID: \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\") " pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:03 crc kubenswrapper[4915]: I1124 23:01:03.553561 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vwzv\" (UniqueName: \"kubernetes.io/projected/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-kube-api-access-8vwzv\") pod \"redhat-marketplace-b9qw9\" (UID: \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\") " pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:03 crc kubenswrapper[4915]: I1124 23:01:03.553722 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-catalog-content\") pod \"redhat-marketplace-b9qw9\" (UID: \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\") " pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:03 crc kubenswrapper[4915]: I1124 23:01:03.656900 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-catalog-content\") pod \"redhat-marketplace-b9qw9\" (UID: \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\") " pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:03 crc kubenswrapper[4915]: I1124 23:01:03.657012 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-utilities\") pod \"redhat-marketplace-b9qw9\" (UID: \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\") " pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:03 crc kubenswrapper[4915]: I1124 23:01:03.657165 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vwzv\" (UniqueName: \"kubernetes.io/projected/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-kube-api-access-8vwzv\") pod \"redhat-marketplace-b9qw9\" (UID: \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\") " pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:03 crc kubenswrapper[4915]: I1124 23:01:03.657995 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-catalog-content\") pod \"redhat-marketplace-b9qw9\" (UID: \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\") " pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:03 crc kubenswrapper[4915]: I1124 23:01:03.658329 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-utilities\") pod \"redhat-marketplace-b9qw9\" (UID: \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\") " pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:03 crc kubenswrapper[4915]: I1124 23:01:03.678441 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vwzv\" (UniqueName: \"kubernetes.io/projected/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-kube-api-access-8vwzv\") pod \"redhat-marketplace-b9qw9\" (UID: \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\") " pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:03 crc kubenswrapper[4915]: I1124 23:01:03.825061 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:04 crc kubenswrapper[4915]: W1124 23:01:04.334128 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1a9cc8e_21bb_4a67_adbd_de7003a61b03.slice/crio-eb2fe30b75b80cef1282d6aed044fab3fc87a8530e80af8d1c327cd0a465a94d WatchSource:0}: Error finding container eb2fe30b75b80cef1282d6aed044fab3fc87a8530e80af8d1c327cd0a465a94d: Status 404 returned error can't find the container with id eb2fe30b75b80cef1282d6aed044fab3fc87a8530e80af8d1c327cd0a465a94d Nov 24 23:01:04 crc kubenswrapper[4915]: I1124 23:01:04.352899 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9qw9" event={"ID":"b1a9cc8e-21bb-4a67-adbd-de7003a61b03","Type":"ContainerStarted","Data":"eb2fe30b75b80cef1282d6aed044fab3fc87a8530e80af8d1c327cd0a465a94d"} Nov 24 23:01:04 crc kubenswrapper[4915]: I1124 23:01:04.363872 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9qw9"] Nov 24 23:01:05 crc kubenswrapper[4915]: I1124 23:01:05.379388 4915 generic.go:334] "Generic (PLEG): container finished" podID="621c0c9b-73f8-49ca-90a1-a7f3dc80382b" containerID="4604234fed634c85d4c1beeb34e43c95605f71b5d350b929b10bd86317d0dde5" exitCode=0 Nov 24 23:01:05 crc kubenswrapper[4915]: I1124 23:01:05.379524 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400421-8wvcv" event={"ID":"621c0c9b-73f8-49ca-90a1-a7f3dc80382b","Type":"ContainerDied","Data":"4604234fed634c85d4c1beeb34e43c95605f71b5d350b929b10bd86317d0dde5"} Nov 24 23:01:05 crc kubenswrapper[4915]: I1124 23:01:05.383926 4915 generic.go:334] "Generic (PLEG): container finished" podID="b1a9cc8e-21bb-4a67-adbd-de7003a61b03" containerID="44ef0952feed4cf7b146638c3b8f2836d7e418b69e7ac5275535750810e3f7cb" exitCode=0 Nov 24 23:01:05 crc kubenswrapper[4915]: I1124 23:01:05.384015 
4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9qw9" event={"ID":"b1a9cc8e-21bb-4a67-adbd-de7003a61b03","Type":"ContainerDied","Data":"44ef0952feed4cf7b146638c3b8f2836d7e418b69e7ac5275535750810e3f7cb"} Nov 24 23:01:06 crc kubenswrapper[4915]: I1124 23:01:06.899609 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:06 crc kubenswrapper[4915]: I1124 23:01:06.961385 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmxqs\" (UniqueName: \"kubernetes.io/projected/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-kube-api-access-bmxqs\") pod \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " Nov 24 23:01:06 crc kubenswrapper[4915]: I1124 23:01:06.961968 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-combined-ca-bundle\") pod \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " Nov 24 23:01:06 crc kubenswrapper[4915]: I1124 23:01:06.962299 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-config-data\") pod \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " Nov 24 23:01:06 crc kubenswrapper[4915]: I1124 23:01:06.962397 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-fernet-keys\") pod \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\" (UID: \"621c0c9b-73f8-49ca-90a1-a7f3dc80382b\") " Nov 24 23:01:06 crc kubenswrapper[4915]: I1124 23:01:06.969739 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "621c0c9b-73f8-49ca-90a1-a7f3dc80382b" (UID: "621c0c9b-73f8-49ca-90a1-a7f3dc80382b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:01:06 crc kubenswrapper[4915]: I1124 23:01:06.969863 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-kube-api-access-bmxqs" (OuterVolumeSpecName: "kube-api-access-bmxqs") pod "621c0c9b-73f8-49ca-90a1-a7f3dc80382b" (UID: "621c0c9b-73f8-49ca-90a1-a7f3dc80382b"). InnerVolumeSpecName "kube-api-access-bmxqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:01:07 crc kubenswrapper[4915]: I1124 23:01:07.021664 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "621c0c9b-73f8-49ca-90a1-a7f3dc80382b" (UID: "621c0c9b-73f8-49ca-90a1-a7f3dc80382b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:01:07 crc kubenswrapper[4915]: I1124 23:01:07.029160 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-config-data" (OuterVolumeSpecName: "config-data") pod "621c0c9b-73f8-49ca-90a1-a7f3dc80382b" (UID: "621c0c9b-73f8-49ca-90a1-a7f3dc80382b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:01:07 crc kubenswrapper[4915]: I1124 23:01:07.068251 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 23:01:07 crc kubenswrapper[4915]: I1124 23:01:07.068285 4915 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 24 23:01:07 crc kubenswrapper[4915]: I1124 23:01:07.068295 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmxqs\" (UniqueName: \"kubernetes.io/projected/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-kube-api-access-bmxqs\") on node \"crc\" DevicePath \"\"" Nov 24 23:01:07 crc kubenswrapper[4915]: I1124 23:01:07.068306 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621c0c9b-73f8-49ca-90a1-a7f3dc80382b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 23:01:07 crc kubenswrapper[4915]: I1124 23:01:07.418570 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9qw9" event={"ID":"b1a9cc8e-21bb-4a67-adbd-de7003a61b03","Type":"ContainerStarted","Data":"54551aeda98ed741ed0ef1bb0bbead52f88ab6ea0090ceeac5e5f44e40dcdf07"} Nov 24 23:01:07 crc kubenswrapper[4915]: I1124 23:01:07.422663 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29400421-8wvcv" event={"ID":"621c0c9b-73f8-49ca-90a1-a7f3dc80382b","Type":"ContainerDied","Data":"22923f48aa95cc09dca455970fadf0ef15a5344127b3a1638c6e69957434e054"} Nov 24 23:01:07 crc kubenswrapper[4915]: I1124 23:01:07.422712 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22923f48aa95cc09dca455970fadf0ef15a5344127b3a1638c6e69957434e054" Nov 24 23:01:07 crc 
kubenswrapper[4915]: I1124 23:01:07.422899 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29400421-8wvcv" Nov 24 23:01:08 crc kubenswrapper[4915]: I1124 23:01:08.445084 4915 generic.go:334] "Generic (PLEG): container finished" podID="b1a9cc8e-21bb-4a67-adbd-de7003a61b03" containerID="54551aeda98ed741ed0ef1bb0bbead52f88ab6ea0090ceeac5e5f44e40dcdf07" exitCode=0 Nov 24 23:01:08 crc kubenswrapper[4915]: I1124 23:01:08.461416 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9qw9" event={"ID":"b1a9cc8e-21bb-4a67-adbd-de7003a61b03","Type":"ContainerDied","Data":"54551aeda98ed741ed0ef1bb0bbead52f88ab6ea0090ceeac5e5f44e40dcdf07"} Nov 24 23:01:09 crc kubenswrapper[4915]: I1124 23:01:09.463483 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9qw9" event={"ID":"b1a9cc8e-21bb-4a67-adbd-de7003a61b03","Type":"ContainerStarted","Data":"77e2e0b082b5040ee26ad9b4506069cd22be3c40d51b62a346ed9b3f6aaef1ac"} Nov 24 23:01:09 crc kubenswrapper[4915]: I1124 23:01:09.493678 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b9qw9" podStartSLOduration=3.014950713 podStartE2EDuration="6.493659325s" podCreationTimestamp="2025-11-24 23:01:03 +0000 UTC" firstStartedPulling="2025-11-24 23:01:05.388687714 +0000 UTC m=+6083.704939907" lastFinishedPulling="2025-11-24 23:01:08.867396316 +0000 UTC m=+6087.183648519" observedRunningTime="2025-11-24 23:01:09.489455622 +0000 UTC m=+6087.805707845" watchObservedRunningTime="2025-11-24 23:01:09.493659325 +0000 UTC m=+6087.809911498" Nov 24 23:01:13 crc kubenswrapper[4915]: I1124 23:01:13.825835 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:13 crc kubenswrapper[4915]: I1124 23:01:13.826602 4915 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:13 crc kubenswrapper[4915]: I1124 23:01:13.896627 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:14 crc kubenswrapper[4915]: I1124 23:01:14.633285 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:14 crc kubenswrapper[4915]: I1124 23:01:14.695149 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9qw9"] Nov 24 23:01:16 crc kubenswrapper[4915]: I1124 23:01:16.568956 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b9qw9" podUID="b1a9cc8e-21bb-4a67-adbd-de7003a61b03" containerName="registry-server" containerID="cri-o://77e2e0b082b5040ee26ad9b4506069cd22be3c40d51b62a346ed9b3f6aaef1ac" gracePeriod=2 Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.194601 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.260111 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-catalog-content\") pod \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\" (UID: \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\") " Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.260384 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-utilities\") pod \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\" (UID: \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\") " Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.260559 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vwzv\" (UniqueName: \"kubernetes.io/projected/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-kube-api-access-8vwzv\") pod \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\" (UID: \"b1a9cc8e-21bb-4a67-adbd-de7003a61b03\") " Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.261676 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-utilities" (OuterVolumeSpecName: "utilities") pod "b1a9cc8e-21bb-4a67-adbd-de7003a61b03" (UID: "b1a9cc8e-21bb-4a67-adbd-de7003a61b03"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.266543 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-kube-api-access-8vwzv" (OuterVolumeSpecName: "kube-api-access-8vwzv") pod "b1a9cc8e-21bb-4a67-adbd-de7003a61b03" (UID: "b1a9cc8e-21bb-4a67-adbd-de7003a61b03"). InnerVolumeSpecName "kube-api-access-8vwzv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.301083 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1a9cc8e-21bb-4a67-adbd-de7003a61b03" (UID: "b1a9cc8e-21bb-4a67-adbd-de7003a61b03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.364388 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.364424 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vwzv\" (UniqueName: \"kubernetes.io/projected/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-kube-api-access-8vwzv\") on node \"crc\" DevicePath \"\"" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.364436 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1a9cc8e-21bb-4a67-adbd-de7003a61b03-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.596107 4915 generic.go:334] "Generic (PLEG): container finished" podID="b1a9cc8e-21bb-4a67-adbd-de7003a61b03" containerID="77e2e0b082b5040ee26ad9b4506069cd22be3c40d51b62a346ed9b3f6aaef1ac" exitCode=0 Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.598114 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b9qw9" event={"ID":"b1a9cc8e-21bb-4a67-adbd-de7003a61b03","Type":"ContainerDied","Data":"77e2e0b082b5040ee26ad9b4506069cd22be3c40d51b62a346ed9b3f6aaef1ac"} Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.598312 4915 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-b9qw9" event={"ID":"b1a9cc8e-21bb-4a67-adbd-de7003a61b03","Type":"ContainerDied","Data":"eb2fe30b75b80cef1282d6aed044fab3fc87a8530e80af8d1c327cd0a465a94d"} Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.598459 4915 scope.go:117] "RemoveContainer" containerID="77e2e0b082b5040ee26ad9b4506069cd22be3c40d51b62a346ed9b3f6aaef1ac" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.598857 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b9qw9" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.680368 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9qw9"] Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.690926 4915 scope.go:117] "RemoveContainer" containerID="54551aeda98ed741ed0ef1bb0bbead52f88ab6ea0090ceeac5e5f44e40dcdf07" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.695891 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b9qw9"] Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.714544 4915 scope.go:117] "RemoveContainer" containerID="44ef0952feed4cf7b146638c3b8f2836d7e418b69e7ac5275535750810e3f7cb" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.774527 4915 scope.go:117] "RemoveContainer" containerID="77e2e0b082b5040ee26ad9b4506069cd22be3c40d51b62a346ed9b3f6aaef1ac" Nov 24 23:01:17 crc kubenswrapper[4915]: E1124 23:01:17.774937 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77e2e0b082b5040ee26ad9b4506069cd22be3c40d51b62a346ed9b3f6aaef1ac\": container with ID starting with 77e2e0b082b5040ee26ad9b4506069cd22be3c40d51b62a346ed9b3f6aaef1ac not found: ID does not exist" containerID="77e2e0b082b5040ee26ad9b4506069cd22be3c40d51b62a346ed9b3f6aaef1ac" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.774985 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77e2e0b082b5040ee26ad9b4506069cd22be3c40d51b62a346ed9b3f6aaef1ac"} err="failed to get container status \"77e2e0b082b5040ee26ad9b4506069cd22be3c40d51b62a346ed9b3f6aaef1ac\": rpc error: code = NotFound desc = could not find container \"77e2e0b082b5040ee26ad9b4506069cd22be3c40d51b62a346ed9b3f6aaef1ac\": container with ID starting with 77e2e0b082b5040ee26ad9b4506069cd22be3c40d51b62a346ed9b3f6aaef1ac not found: ID does not exist" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.775016 4915 scope.go:117] "RemoveContainer" containerID="54551aeda98ed741ed0ef1bb0bbead52f88ab6ea0090ceeac5e5f44e40dcdf07" Nov 24 23:01:17 crc kubenswrapper[4915]: E1124 23:01:17.775362 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54551aeda98ed741ed0ef1bb0bbead52f88ab6ea0090ceeac5e5f44e40dcdf07\": container with ID starting with 54551aeda98ed741ed0ef1bb0bbead52f88ab6ea0090ceeac5e5f44e40dcdf07 not found: ID does not exist" containerID="54551aeda98ed741ed0ef1bb0bbead52f88ab6ea0090ceeac5e5f44e40dcdf07" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.775438 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54551aeda98ed741ed0ef1bb0bbead52f88ab6ea0090ceeac5e5f44e40dcdf07"} err="failed to get container status \"54551aeda98ed741ed0ef1bb0bbead52f88ab6ea0090ceeac5e5f44e40dcdf07\": rpc error: code = NotFound desc = could not find container \"54551aeda98ed741ed0ef1bb0bbead52f88ab6ea0090ceeac5e5f44e40dcdf07\": container with ID starting with 54551aeda98ed741ed0ef1bb0bbead52f88ab6ea0090ceeac5e5f44e40dcdf07 not found: ID does not exist" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.775460 4915 scope.go:117] "RemoveContainer" containerID="44ef0952feed4cf7b146638c3b8f2836d7e418b69e7ac5275535750810e3f7cb" Nov 24 23:01:17 crc kubenswrapper[4915]: E1124 
23:01:17.776881 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44ef0952feed4cf7b146638c3b8f2836d7e418b69e7ac5275535750810e3f7cb\": container with ID starting with 44ef0952feed4cf7b146638c3b8f2836d7e418b69e7ac5275535750810e3f7cb not found: ID does not exist" containerID="44ef0952feed4cf7b146638c3b8f2836d7e418b69e7ac5275535750810e3f7cb" Nov 24 23:01:17 crc kubenswrapper[4915]: I1124 23:01:17.776920 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44ef0952feed4cf7b146638c3b8f2836d7e418b69e7ac5275535750810e3f7cb"} err="failed to get container status \"44ef0952feed4cf7b146638c3b8f2836d7e418b69e7ac5275535750810e3f7cb\": rpc error: code = NotFound desc = could not find container \"44ef0952feed4cf7b146638c3b8f2836d7e418b69e7ac5275535750810e3f7cb\": container with ID starting with 44ef0952feed4cf7b146638c3b8f2836d7e418b69e7ac5275535750810e3f7cb not found: ID does not exist" Nov 24 23:01:18 crc kubenswrapper[4915]: I1124 23:01:18.446254 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a9cc8e-21bb-4a67-adbd-de7003a61b03" path="/var/lib/kubelet/pods/b1a9cc8e-21bb-4a67-adbd-de7003a61b03/volumes" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.320351 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jf8pf"] Nov 24 23:02:03 crc kubenswrapper[4915]: E1124 23:02:03.321238 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621c0c9b-73f8-49ca-90a1-a7f3dc80382b" containerName="keystone-cron" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.321250 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="621c0c9b-73f8-49ca-90a1-a7f3dc80382b" containerName="keystone-cron" Nov 24 23:02:03 crc kubenswrapper[4915]: E1124 23:02:03.321275 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a9cc8e-21bb-4a67-adbd-de7003a61b03" 
containerName="registry-server" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.321281 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a9cc8e-21bb-4a67-adbd-de7003a61b03" containerName="registry-server" Nov 24 23:02:03 crc kubenswrapper[4915]: E1124 23:02:03.321293 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a9cc8e-21bb-4a67-adbd-de7003a61b03" containerName="extract-utilities" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.321300 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a9cc8e-21bb-4a67-adbd-de7003a61b03" containerName="extract-utilities" Nov 24 23:02:03 crc kubenswrapper[4915]: E1124 23:02:03.321313 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a9cc8e-21bb-4a67-adbd-de7003a61b03" containerName="extract-content" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.321319 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a9cc8e-21bb-4a67-adbd-de7003a61b03" containerName="extract-content" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.321547 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a9cc8e-21bb-4a67-adbd-de7003a61b03" containerName="registry-server" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.321564 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="621c0c9b-73f8-49ca-90a1-a7f3dc80382b" containerName="keystone-cron" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.323331 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.337374 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jf8pf"] Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.429577 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b208b3ff-1587-4f84-a81c-5acae1678183-utilities\") pod \"certified-operators-jf8pf\" (UID: \"b208b3ff-1587-4f84-a81c-5acae1678183\") " pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.429762 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b208b3ff-1587-4f84-a81c-5acae1678183-catalog-content\") pod \"certified-operators-jf8pf\" (UID: \"b208b3ff-1587-4f84-a81c-5acae1678183\") " pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.429867 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqt77\" (UniqueName: \"kubernetes.io/projected/b208b3ff-1587-4f84-a81c-5acae1678183-kube-api-access-pqt77\") pod \"certified-operators-jf8pf\" (UID: \"b208b3ff-1587-4f84-a81c-5acae1678183\") " pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.532285 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b208b3ff-1587-4f84-a81c-5acae1678183-utilities\") pod \"certified-operators-jf8pf\" (UID: \"b208b3ff-1587-4f84-a81c-5acae1678183\") " pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.532567 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b208b3ff-1587-4f84-a81c-5acae1678183-catalog-content\") pod \"certified-operators-jf8pf\" (UID: \"b208b3ff-1587-4f84-a81c-5acae1678183\") " pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.532635 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqt77\" (UniqueName: \"kubernetes.io/projected/b208b3ff-1587-4f84-a81c-5acae1678183-kube-api-access-pqt77\") pod \"certified-operators-jf8pf\" (UID: \"b208b3ff-1587-4f84-a81c-5acae1678183\") " pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.533833 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b208b3ff-1587-4f84-a81c-5acae1678183-utilities\") pod \"certified-operators-jf8pf\" (UID: \"b208b3ff-1587-4f84-a81c-5acae1678183\") " pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.534073 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b208b3ff-1587-4f84-a81c-5acae1678183-catalog-content\") pod \"certified-operators-jf8pf\" (UID: \"b208b3ff-1587-4f84-a81c-5acae1678183\") " pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.553293 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqt77\" (UniqueName: \"kubernetes.io/projected/b208b3ff-1587-4f84-a81c-5acae1678183-kube-api-access-pqt77\") pod \"certified-operators-jf8pf\" (UID: \"b208b3ff-1587-4f84-a81c-5acae1678183\") " pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:03 crc kubenswrapper[4915]: I1124 23:02:03.671665 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:04 crc kubenswrapper[4915]: I1124 23:02:04.218901 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jf8pf"] Nov 24 23:02:04 crc kubenswrapper[4915]: I1124 23:02:04.445091 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jf8pf" event={"ID":"b208b3ff-1587-4f84-a81c-5acae1678183","Type":"ContainerStarted","Data":"7beefe17c3dcc3ce753e643c8c7ff42c99e37df50a5fae18e74757605f8bef52"} Nov 24 23:02:05 crc kubenswrapper[4915]: I1124 23:02:05.463229 4915 generic.go:334] "Generic (PLEG): container finished" podID="b208b3ff-1587-4f84-a81c-5acae1678183" containerID="9b0ec818b5fa2cd29f6f985ac2bfe93c22fab34c9700c097d5c2cfaaf23c6bc6" exitCode=0 Nov 24 23:02:05 crc kubenswrapper[4915]: I1124 23:02:05.463609 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jf8pf" event={"ID":"b208b3ff-1587-4f84-a81c-5acae1678183","Type":"ContainerDied","Data":"9b0ec818b5fa2cd29f6f985ac2bfe93c22fab34c9700c097d5c2cfaaf23c6bc6"} Nov 24 23:02:06 crc kubenswrapper[4915]: I1124 23:02:06.477499 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jf8pf" event={"ID":"b208b3ff-1587-4f84-a81c-5acae1678183","Type":"ContainerStarted","Data":"b648a2b9b722a1be395f647287d3d4e03aa151690a6483b32f5bf0af48ad1053"} Nov 24 23:02:08 crc kubenswrapper[4915]: I1124 23:02:08.498052 4915 generic.go:334] "Generic (PLEG): container finished" podID="b208b3ff-1587-4f84-a81c-5acae1678183" containerID="b648a2b9b722a1be395f647287d3d4e03aa151690a6483b32f5bf0af48ad1053" exitCode=0 Nov 24 23:02:08 crc kubenswrapper[4915]: I1124 23:02:08.498146 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jf8pf" 
event={"ID":"b208b3ff-1587-4f84-a81c-5acae1678183","Type":"ContainerDied","Data":"b648a2b9b722a1be395f647287d3d4e03aa151690a6483b32f5bf0af48ad1053"} Nov 24 23:02:09 crc kubenswrapper[4915]: I1124 23:02:09.518076 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jf8pf" event={"ID":"b208b3ff-1587-4f84-a81c-5acae1678183","Type":"ContainerStarted","Data":"6bb825afd8b752ad21f9ec265ad359a16510be82d421c0fad74dd94e0e38eee2"} Nov 24 23:02:09 crc kubenswrapper[4915]: I1124 23:02:09.538894 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jf8pf" podStartSLOduration=3.116384019 podStartE2EDuration="6.538873312s" podCreationTimestamp="2025-11-24 23:02:03 +0000 UTC" firstStartedPulling="2025-11-24 23:02:05.466718808 +0000 UTC m=+6143.782971001" lastFinishedPulling="2025-11-24 23:02:08.889208121 +0000 UTC m=+6147.205460294" observedRunningTime="2025-11-24 23:02:09.535901592 +0000 UTC m=+6147.852153765" watchObservedRunningTime="2025-11-24 23:02:09.538873312 +0000 UTC m=+6147.855125495" Nov 24 23:02:13 crc kubenswrapper[4915]: I1124 23:02:13.671900 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:13 crc kubenswrapper[4915]: I1124 23:02:13.672528 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:13 crc kubenswrapper[4915]: I1124 23:02:13.737146 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:14 crc kubenswrapper[4915]: I1124 23:02:14.642356 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:14 crc kubenswrapper[4915]: I1124 23:02:14.695013 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-jf8pf"] Nov 24 23:02:16 crc kubenswrapper[4915]: I1124 23:02:16.613900 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jf8pf" podUID="b208b3ff-1587-4f84-a81c-5acae1678183" containerName="registry-server" containerID="cri-o://6bb825afd8b752ad21f9ec265ad359a16510be82d421c0fad74dd94e0e38eee2" gracePeriod=2 Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.193628 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.301098 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b208b3ff-1587-4f84-a81c-5acae1678183-catalog-content\") pod \"b208b3ff-1587-4f84-a81c-5acae1678183\" (UID: \"b208b3ff-1587-4f84-a81c-5acae1678183\") " Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.301347 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b208b3ff-1587-4f84-a81c-5acae1678183-utilities\") pod \"b208b3ff-1587-4f84-a81c-5acae1678183\" (UID: \"b208b3ff-1587-4f84-a81c-5acae1678183\") " Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.301470 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqt77\" (UniqueName: \"kubernetes.io/projected/b208b3ff-1587-4f84-a81c-5acae1678183-kube-api-access-pqt77\") pod \"b208b3ff-1587-4f84-a81c-5acae1678183\" (UID: \"b208b3ff-1587-4f84-a81c-5acae1678183\") " Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.302423 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b208b3ff-1587-4f84-a81c-5acae1678183-utilities" (OuterVolumeSpecName: "utilities") pod "b208b3ff-1587-4f84-a81c-5acae1678183" (UID: 
"b208b3ff-1587-4f84-a81c-5acae1678183"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.309995 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b208b3ff-1587-4f84-a81c-5acae1678183-kube-api-access-pqt77" (OuterVolumeSpecName: "kube-api-access-pqt77") pod "b208b3ff-1587-4f84-a81c-5acae1678183" (UID: "b208b3ff-1587-4f84-a81c-5acae1678183"). InnerVolumeSpecName "kube-api-access-pqt77". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.368469 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b208b3ff-1587-4f84-a81c-5acae1678183-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b208b3ff-1587-4f84-a81c-5acae1678183" (UID: "b208b3ff-1587-4f84-a81c-5acae1678183"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.405396 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b208b3ff-1587-4f84-a81c-5acae1678183-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.405440 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b208b3ff-1587-4f84-a81c-5acae1678183-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.405472 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqt77\" (UniqueName: \"kubernetes.io/projected/b208b3ff-1587-4f84-a81c-5acae1678183-kube-api-access-pqt77\") on node \"crc\" DevicePath \"\"" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.645980 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="b208b3ff-1587-4f84-a81c-5acae1678183" containerID="6bb825afd8b752ad21f9ec265ad359a16510be82d421c0fad74dd94e0e38eee2" exitCode=0 Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.646029 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jf8pf" event={"ID":"b208b3ff-1587-4f84-a81c-5acae1678183","Type":"ContainerDied","Data":"6bb825afd8b752ad21f9ec265ad359a16510be82d421c0fad74dd94e0e38eee2"} Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.646059 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jf8pf" event={"ID":"b208b3ff-1587-4f84-a81c-5acae1678183","Type":"ContainerDied","Data":"7beefe17c3dcc3ce753e643c8c7ff42c99e37df50a5fae18e74757605f8bef52"} Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.646082 4915 scope.go:117] "RemoveContainer" containerID="6bb825afd8b752ad21f9ec265ad359a16510be82d421c0fad74dd94e0e38eee2" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.646229 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jf8pf" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.698023 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jf8pf"] Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.703243 4915 scope.go:117] "RemoveContainer" containerID="b648a2b9b722a1be395f647287d3d4e03aa151690a6483b32f5bf0af48ad1053" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.714935 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jf8pf"] Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.743857 4915 scope.go:117] "RemoveContainer" containerID="9b0ec818b5fa2cd29f6f985ac2bfe93c22fab34c9700c097d5c2cfaaf23c6bc6" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.796989 4915 scope.go:117] "RemoveContainer" containerID="6bb825afd8b752ad21f9ec265ad359a16510be82d421c0fad74dd94e0e38eee2" Nov 24 23:02:17 crc kubenswrapper[4915]: E1124 23:02:17.797339 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bb825afd8b752ad21f9ec265ad359a16510be82d421c0fad74dd94e0e38eee2\": container with ID starting with 6bb825afd8b752ad21f9ec265ad359a16510be82d421c0fad74dd94e0e38eee2 not found: ID does not exist" containerID="6bb825afd8b752ad21f9ec265ad359a16510be82d421c0fad74dd94e0e38eee2" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.797379 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bb825afd8b752ad21f9ec265ad359a16510be82d421c0fad74dd94e0e38eee2"} err="failed to get container status \"6bb825afd8b752ad21f9ec265ad359a16510be82d421c0fad74dd94e0e38eee2\": rpc error: code = NotFound desc = could not find container \"6bb825afd8b752ad21f9ec265ad359a16510be82d421c0fad74dd94e0e38eee2\": container with ID starting with 6bb825afd8b752ad21f9ec265ad359a16510be82d421c0fad74dd94e0e38eee2 not 
found: ID does not exist" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.797405 4915 scope.go:117] "RemoveContainer" containerID="b648a2b9b722a1be395f647287d3d4e03aa151690a6483b32f5bf0af48ad1053" Nov 24 23:02:17 crc kubenswrapper[4915]: E1124 23:02:17.800020 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b648a2b9b722a1be395f647287d3d4e03aa151690a6483b32f5bf0af48ad1053\": container with ID starting with b648a2b9b722a1be395f647287d3d4e03aa151690a6483b32f5bf0af48ad1053 not found: ID does not exist" containerID="b648a2b9b722a1be395f647287d3d4e03aa151690a6483b32f5bf0af48ad1053" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.800078 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b648a2b9b722a1be395f647287d3d4e03aa151690a6483b32f5bf0af48ad1053"} err="failed to get container status \"b648a2b9b722a1be395f647287d3d4e03aa151690a6483b32f5bf0af48ad1053\": rpc error: code = NotFound desc = could not find container \"b648a2b9b722a1be395f647287d3d4e03aa151690a6483b32f5bf0af48ad1053\": container with ID starting with b648a2b9b722a1be395f647287d3d4e03aa151690a6483b32f5bf0af48ad1053 not found: ID does not exist" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.800107 4915 scope.go:117] "RemoveContainer" containerID="9b0ec818b5fa2cd29f6f985ac2bfe93c22fab34c9700c097d5c2cfaaf23c6bc6" Nov 24 23:02:17 crc kubenswrapper[4915]: E1124 23:02:17.800451 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b0ec818b5fa2cd29f6f985ac2bfe93c22fab34c9700c097d5c2cfaaf23c6bc6\": container with ID starting with 9b0ec818b5fa2cd29f6f985ac2bfe93c22fab34c9700c097d5c2cfaaf23c6bc6 not found: ID does not exist" containerID="9b0ec818b5fa2cd29f6f985ac2bfe93c22fab34c9700c097d5c2cfaaf23c6bc6" Nov 24 23:02:17 crc kubenswrapper[4915]: I1124 23:02:17.800478 4915 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b0ec818b5fa2cd29f6f985ac2bfe93c22fab34c9700c097d5c2cfaaf23c6bc6"} err="failed to get container status \"9b0ec818b5fa2cd29f6f985ac2bfe93c22fab34c9700c097d5c2cfaaf23c6bc6\": rpc error: code = NotFound desc = could not find container \"9b0ec818b5fa2cd29f6f985ac2bfe93c22fab34c9700c097d5c2cfaaf23c6bc6\": container with ID starting with 9b0ec818b5fa2cd29f6f985ac2bfe93c22fab34c9700c097d5c2cfaaf23c6bc6 not found: ID does not exist" Nov 24 23:02:18 crc kubenswrapper[4915]: I1124 23:02:18.442373 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b208b3ff-1587-4f84-a81c-5acae1678183" path="/var/lib/kubelet/pods/b208b3ff-1587-4f84-a81c-5acae1678183/volumes" Nov 24 23:02:54 crc kubenswrapper[4915]: I1124 23:02:54.327767 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:02:54 crc kubenswrapper[4915]: I1124 23:02:54.328514 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:03:24 crc kubenswrapper[4915]: I1124 23:03:24.327212 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:03:24 crc kubenswrapper[4915]: I1124 23:03:24.327947 4915 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:03:54 crc kubenswrapper[4915]: I1124 23:03:54.327162 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:03:54 crc kubenswrapper[4915]: I1124 23:03:54.328125 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:03:54 crc kubenswrapper[4915]: I1124 23:03:54.328188 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 23:03:54 crc kubenswrapper[4915]: I1124 23:03:54.329248 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"952a49a7a6a5e5a33f1cb4cff4cc7ececaaecfd6b8b168edbd55891db41d2c90"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 23:03:54 crc kubenswrapper[4915]: I1124 23:03:54.329307 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" 
containerID="cri-o://952a49a7a6a5e5a33f1cb4cff4cc7ececaaecfd6b8b168edbd55891db41d2c90" gracePeriod=600 Nov 24 23:03:54 crc kubenswrapper[4915]: I1124 23:03:54.977192 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="952a49a7a6a5e5a33f1cb4cff4cc7ececaaecfd6b8b168edbd55891db41d2c90" exitCode=0 Nov 24 23:03:54 crc kubenswrapper[4915]: I1124 23:03:54.977286 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"952a49a7a6a5e5a33f1cb4cff4cc7ececaaecfd6b8b168edbd55891db41d2c90"} Nov 24 23:03:54 crc kubenswrapper[4915]: I1124 23:03:54.978195 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47"} Nov 24 23:03:54 crc kubenswrapper[4915]: I1124 23:03:54.978258 4915 scope.go:117] "RemoveContainer" containerID="afd63409dc5ed0c5f5785cc3b821d65e5da68d775440d86d9a2e885312a6da80" Nov 24 23:05:54 crc kubenswrapper[4915]: I1124 23:05:54.329182 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:05:54 crc kubenswrapper[4915]: I1124 23:05:54.329731 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:06:24 crc kubenswrapper[4915]: 
I1124 23:06:24.327488 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:06:24 crc kubenswrapper[4915]: I1124 23:06:24.328208 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:06:50 crc kubenswrapper[4915]: E1124 23:06:50.607120 4915 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.107:38090->38.102.83.107:46247: read tcp 38.102.83.107:38090->38.102.83.107:46247: read: connection reset by peer Nov 24 23:06:54 crc kubenswrapper[4915]: I1124 23:06:54.327054 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:06:54 crc kubenswrapper[4915]: I1124 23:06:54.327646 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:06:54 crc kubenswrapper[4915]: I1124 23:06:54.327701 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 23:06:54 crc kubenswrapper[4915]: I1124 23:06:54.328940 4915 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 23:06:54 crc kubenswrapper[4915]: I1124 23:06:54.329030 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" gracePeriod=600 Nov 24 23:06:54 crc kubenswrapper[4915]: E1124 23:06:54.456635 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:06:54 crc kubenswrapper[4915]: I1124 23:06:54.990645 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" exitCode=0 Nov 24 23:06:54 crc kubenswrapper[4915]: I1124 23:06:54.991025 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47"} Nov 24 23:06:54 crc kubenswrapper[4915]: I1124 23:06:54.991078 4915 scope.go:117] "RemoveContainer" 
containerID="952a49a7a6a5e5a33f1cb4cff4cc7ececaaecfd6b8b168edbd55891db41d2c90" Nov 24 23:06:54 crc kubenswrapper[4915]: I1124 23:06:54.992033 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:06:54 crc kubenswrapper[4915]: E1124 23:06:54.992526 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:07:07 crc kubenswrapper[4915]: I1124 23:07:07.427322 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:07:07 crc kubenswrapper[4915]: E1124 23:07:07.428527 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:07:19 crc kubenswrapper[4915]: I1124 23:07:19.428230 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:07:19 crc kubenswrapper[4915]: E1124 23:07:19.428986 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:07:34 crc kubenswrapper[4915]: I1124 23:07:34.430675 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:07:34 crc kubenswrapper[4915]: E1124 23:07:34.431768 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:07:49 crc kubenswrapper[4915]: I1124 23:07:49.426797 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:07:49 crc kubenswrapper[4915]: E1124 23:07:49.427822 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:08:00 crc kubenswrapper[4915]: I1124 23:08:00.426943 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:08:00 crc kubenswrapper[4915]: E1124 23:08:00.427950 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:08:02 crc kubenswrapper[4915]: E1124 23:08:02.487975 4915 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.107:60338->38.102.83.107:46247: write tcp 38.102.83.107:60338->38.102.83.107:46247: write: broken pipe Nov 24 23:08:11 crc kubenswrapper[4915]: I1124 23:08:11.428328 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:08:11 crc kubenswrapper[4915]: E1124 23:08:11.429982 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:08:26 crc kubenswrapper[4915]: I1124 23:08:26.428117 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:08:26 crc kubenswrapper[4915]: E1124 23:08:26.429523 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:08:38 crc kubenswrapper[4915]: I1124 23:08:38.427323 4915 scope.go:117] "RemoveContainer" 
containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:08:38 crc kubenswrapper[4915]: E1124 23:08:38.428442 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:08:49 crc kubenswrapper[4915]: I1124 23:08:49.427347 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:08:49 crc kubenswrapper[4915]: E1124 23:08:49.428671 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.071445 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-62sgq"] Nov 24 23:08:57 crc kubenswrapper[4915]: E1124 23:08:57.073195 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b208b3ff-1587-4f84-a81c-5acae1678183" containerName="extract-utilities" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.073222 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b208b3ff-1587-4f84-a81c-5acae1678183" containerName="extract-utilities" Nov 24 23:08:57 crc kubenswrapper[4915]: E1124 23:08:57.073259 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b208b3ff-1587-4f84-a81c-5acae1678183" 
containerName="extract-content" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.073274 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b208b3ff-1587-4f84-a81c-5acae1678183" containerName="extract-content" Nov 24 23:08:57 crc kubenswrapper[4915]: E1124 23:08:57.073297 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b208b3ff-1587-4f84-a81c-5acae1678183" containerName="registry-server" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.073311 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b208b3ff-1587-4f84-a81c-5acae1678183" containerName="registry-server" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.073773 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b208b3ff-1587-4f84-a81c-5acae1678183" containerName="registry-server" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.081343 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.104102 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-62sgq"] Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.208469 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-catalog-content\") pod \"community-operators-62sgq\" (UID: \"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\") " pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.208523 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9wlj\" (UniqueName: \"kubernetes.io/projected/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-kube-api-access-f9wlj\") pod \"community-operators-62sgq\" (UID: \"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\") " 
pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.209580 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-utilities\") pod \"community-operators-62sgq\" (UID: \"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\") " pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.312024 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-catalog-content\") pod \"community-operators-62sgq\" (UID: \"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\") " pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.312085 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9wlj\" (UniqueName: \"kubernetes.io/projected/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-kube-api-access-f9wlj\") pod \"community-operators-62sgq\" (UID: \"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\") " pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.312277 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-utilities\") pod \"community-operators-62sgq\" (UID: \"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\") " pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.312654 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-catalog-content\") pod \"community-operators-62sgq\" (UID: \"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\") " 
pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.312832 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-utilities\") pod \"community-operators-62sgq\" (UID: \"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\") " pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.339988 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9wlj\" (UniqueName: \"kubernetes.io/projected/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-kube-api-access-f9wlj\") pod \"community-operators-62sgq\" (UID: \"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\") " pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.413037 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:08:57 crc kubenswrapper[4915]: I1124 23:08:57.951009 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-62sgq"] Nov 24 23:08:58 crc kubenswrapper[4915]: I1124 23:08:58.606583 4915 generic.go:334] "Generic (PLEG): container finished" podID="206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" containerID="e0d86a934adfa74a1b21ef35768c0599ef302e1cdff426306304e5b4b6ae9c67" exitCode=0 Nov 24 23:08:58 crc kubenswrapper[4915]: I1124 23:08:58.606697 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62sgq" event={"ID":"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8","Type":"ContainerDied","Data":"e0d86a934adfa74a1b21ef35768c0599ef302e1cdff426306304e5b4b6ae9c67"} Nov 24 23:08:58 crc kubenswrapper[4915]: I1124 23:08:58.606938 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62sgq" 
event={"ID":"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8","Type":"ContainerStarted","Data":"648dc393bbfcbd0e0819bdd219cf1e5ba4b98d0da510fc8ec4a16d67ab1e026a"} Nov 24 23:08:58 crc kubenswrapper[4915]: I1124 23:08:58.608690 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 23:08:59 crc kubenswrapper[4915]: I1124 23:08:59.629440 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62sgq" event={"ID":"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8","Type":"ContainerStarted","Data":"da58134e8c6f449adadd70e6c894b8980fb478baf41a16ed3afa4bed41e61063"} Nov 24 23:09:00 crc kubenswrapper[4915]: I1124 23:09:00.644222 4915 generic.go:334] "Generic (PLEG): container finished" podID="206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" containerID="da58134e8c6f449adadd70e6c894b8980fb478baf41a16ed3afa4bed41e61063" exitCode=0 Nov 24 23:09:00 crc kubenswrapper[4915]: I1124 23:09:00.644352 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62sgq" event={"ID":"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8","Type":"ContainerDied","Data":"da58134e8c6f449adadd70e6c894b8980fb478baf41a16ed3afa4bed41e61063"} Nov 24 23:09:01 crc kubenswrapper[4915]: I1124 23:09:01.663562 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62sgq" event={"ID":"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8","Type":"ContainerStarted","Data":"cf323dbee840edf8775d762fc96202e963b9cb54421dde99facbeac8e472248f"} Nov 24 23:09:01 crc kubenswrapper[4915]: I1124 23:09:01.686731 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-62sgq" podStartSLOduration=2.256351003 podStartE2EDuration="4.686703991s" podCreationTimestamp="2025-11-24 23:08:57 +0000 UTC" firstStartedPulling="2025-11-24 23:08:58.608379459 +0000 UTC m=+6556.924631642" lastFinishedPulling="2025-11-24 23:09:01.038732427 +0000 UTC 
m=+6559.354984630" observedRunningTime="2025-11-24 23:09:01.685285023 +0000 UTC m=+6560.001537266" watchObservedRunningTime="2025-11-24 23:09:01.686703991 +0000 UTC m=+6560.002956194" Nov 24 23:09:02 crc kubenswrapper[4915]: I1124 23:09:02.453972 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2sd4d"] Nov 24 23:09:02 crc kubenswrapper[4915]: I1124 23:09:02.457277 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:02 crc kubenswrapper[4915]: I1124 23:09:02.505305 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sd4d"] Nov 24 23:09:02 crc kubenswrapper[4915]: I1124 23:09:02.650010 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b32f5580-67dc-442c-a5f6-1273a69f4a66-utilities\") pod \"redhat-operators-2sd4d\" (UID: \"b32f5580-67dc-442c-a5f6-1273a69f4a66\") " pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:02 crc kubenswrapper[4915]: I1124 23:09:02.650298 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b32f5580-67dc-442c-a5f6-1273a69f4a66-catalog-content\") pod \"redhat-operators-2sd4d\" (UID: \"b32f5580-67dc-442c-a5f6-1273a69f4a66\") " pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:02 crc kubenswrapper[4915]: I1124 23:09:02.650335 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmwxw\" (UniqueName: \"kubernetes.io/projected/b32f5580-67dc-442c-a5f6-1273a69f4a66-kube-api-access-cmwxw\") pod \"redhat-operators-2sd4d\" (UID: \"b32f5580-67dc-442c-a5f6-1273a69f4a66\") " pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:02 crc kubenswrapper[4915]: I1124 
23:09:02.752594 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b32f5580-67dc-442c-a5f6-1273a69f4a66-utilities\") pod \"redhat-operators-2sd4d\" (UID: \"b32f5580-67dc-442c-a5f6-1273a69f4a66\") " pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:02 crc kubenswrapper[4915]: I1124 23:09:02.752804 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b32f5580-67dc-442c-a5f6-1273a69f4a66-catalog-content\") pod \"redhat-operators-2sd4d\" (UID: \"b32f5580-67dc-442c-a5f6-1273a69f4a66\") " pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:02 crc kubenswrapper[4915]: I1124 23:09:02.752863 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmwxw\" (UniqueName: \"kubernetes.io/projected/b32f5580-67dc-442c-a5f6-1273a69f4a66-kube-api-access-cmwxw\") pod \"redhat-operators-2sd4d\" (UID: \"b32f5580-67dc-442c-a5f6-1273a69f4a66\") " pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:02 crc kubenswrapper[4915]: I1124 23:09:02.753166 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b32f5580-67dc-442c-a5f6-1273a69f4a66-utilities\") pod \"redhat-operators-2sd4d\" (UID: \"b32f5580-67dc-442c-a5f6-1273a69f4a66\") " pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:02 crc kubenswrapper[4915]: I1124 23:09:02.753223 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b32f5580-67dc-442c-a5f6-1273a69f4a66-catalog-content\") pod \"redhat-operators-2sd4d\" (UID: \"b32f5580-67dc-442c-a5f6-1273a69f4a66\") " pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:02 crc kubenswrapper[4915]: I1124 23:09:02.778701 4915 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cmwxw\" (UniqueName: \"kubernetes.io/projected/b32f5580-67dc-442c-a5f6-1273a69f4a66-kube-api-access-cmwxw\") pod \"redhat-operators-2sd4d\" (UID: \"b32f5580-67dc-442c-a5f6-1273a69f4a66\") " pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:02 crc kubenswrapper[4915]: I1124 23:09:02.806370 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:03 crc kubenswrapper[4915]: I1124 23:09:03.310632 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2sd4d"] Nov 24 23:09:03 crc kubenswrapper[4915]: I1124 23:09:03.739634 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd4d" event={"ID":"b32f5580-67dc-442c-a5f6-1273a69f4a66","Type":"ContainerStarted","Data":"977c2ad14480cca453aad3345b28f741c9aa286fd52747485358b32bbc282c66"} Nov 24 23:09:03 crc kubenswrapper[4915]: I1124 23:09:03.739901 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd4d" event={"ID":"b32f5580-67dc-442c-a5f6-1273a69f4a66","Type":"ContainerStarted","Data":"52d878bd454bf5145685edd895e8e61bcf1af60d225071a228f42faeae5a18a3"} Nov 24 23:09:04 crc kubenswrapper[4915]: I1124 23:09:04.427678 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:09:04 crc kubenswrapper[4915]: E1124 23:09:04.428350 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:09:04 crc kubenswrapper[4915]: 
I1124 23:09:04.756825 4915 generic.go:334] "Generic (PLEG): container finished" podID="b32f5580-67dc-442c-a5f6-1273a69f4a66" containerID="977c2ad14480cca453aad3345b28f741c9aa286fd52747485358b32bbc282c66" exitCode=0 Nov 24 23:09:04 crc kubenswrapper[4915]: I1124 23:09:04.756904 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd4d" event={"ID":"b32f5580-67dc-442c-a5f6-1273a69f4a66","Type":"ContainerDied","Data":"977c2ad14480cca453aad3345b28f741c9aa286fd52747485358b32bbc282c66"} Nov 24 23:09:05 crc kubenswrapper[4915]: I1124 23:09:05.776827 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd4d" event={"ID":"b32f5580-67dc-442c-a5f6-1273a69f4a66","Type":"ContainerStarted","Data":"5ee172caa18ee1fc57b74dacaf4eece3194b8a58ef10210c75744c4bf5e73469"} Nov 24 23:09:07 crc kubenswrapper[4915]: I1124 23:09:07.413839 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:09:07 crc kubenswrapper[4915]: I1124 23:09:07.414386 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:09:07 crc kubenswrapper[4915]: I1124 23:09:07.498396 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:09:07 crc kubenswrapper[4915]: I1124 23:09:07.876180 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:09:08 crc kubenswrapper[4915]: I1124 23:09:08.820488 4915 generic.go:334] "Generic (PLEG): container finished" podID="b32f5580-67dc-442c-a5f6-1273a69f4a66" containerID="5ee172caa18ee1fc57b74dacaf4eece3194b8a58ef10210c75744c4bf5e73469" exitCode=0 Nov 24 23:09:08 crc kubenswrapper[4915]: I1124 23:09:08.820605 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2sd4d" event={"ID":"b32f5580-67dc-442c-a5f6-1273a69f4a66","Type":"ContainerDied","Data":"5ee172caa18ee1fc57b74dacaf4eece3194b8a58ef10210c75744c4bf5e73469"} Nov 24 23:09:09 crc kubenswrapper[4915]: I1124 23:09:09.836557 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd4d" event={"ID":"b32f5580-67dc-442c-a5f6-1273a69f4a66","Type":"ContainerStarted","Data":"04a1d4b7dad6c25ec25bc8420f459e96bccdc2f32913d37610ab8a10ddbc05eb"} Nov 24 23:09:09 crc kubenswrapper[4915]: I1124 23:09:09.869797 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2sd4d" podStartSLOduration=2.348690383 podStartE2EDuration="7.869746327s" podCreationTimestamp="2025-11-24 23:09:02 +0000 UTC" firstStartedPulling="2025-11-24 23:09:03.742373363 +0000 UTC m=+6562.058625536" lastFinishedPulling="2025-11-24 23:09:09.263429307 +0000 UTC m=+6567.579681480" observedRunningTime="2025-11-24 23:09:09.857727952 +0000 UTC m=+6568.173980165" watchObservedRunningTime="2025-11-24 23:09:09.869746327 +0000 UTC m=+6568.185998540" Nov 24 23:09:10 crc kubenswrapper[4915]: I1124 23:09:10.242175 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-62sgq"] Nov 24 23:09:10 crc kubenswrapper[4915]: I1124 23:09:10.242955 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-62sgq" podUID="206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" containerName="registry-server" containerID="cri-o://cf323dbee840edf8775d762fc96202e963b9cb54421dde99facbeac8e472248f" gracePeriod=2 Nov 24 23:09:10 crc kubenswrapper[4915]: I1124 23:09:10.848819 4915 generic.go:334] "Generic (PLEG): container finished" podID="206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" containerID="cf323dbee840edf8775d762fc96202e963b9cb54421dde99facbeac8e472248f" exitCode=0 Nov 24 23:09:10 crc kubenswrapper[4915]: I1124 
23:09:10.849054 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62sgq" event={"ID":"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8","Type":"ContainerDied","Data":"cf323dbee840edf8775d762fc96202e963b9cb54421dde99facbeac8e472248f"} Nov 24 23:09:10 crc kubenswrapper[4915]: I1124 23:09:10.849079 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62sgq" event={"ID":"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8","Type":"ContainerDied","Data":"648dc393bbfcbd0e0819bdd219cf1e5ba4b98d0da510fc8ec4a16d67ab1e026a"} Nov 24 23:09:10 crc kubenswrapper[4915]: I1124 23:09:10.849090 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="648dc393bbfcbd0e0819bdd219cf1e5ba4b98d0da510fc8ec4a16d67ab1e026a" Nov 24 23:09:10 crc kubenswrapper[4915]: I1124 23:09:10.867177 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:09:10 crc kubenswrapper[4915]: I1124 23:09:10.965057 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-catalog-content\") pod \"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\" (UID: \"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\") " Nov 24 23:09:10 crc kubenswrapper[4915]: I1124 23:09:10.965343 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-utilities\") pod \"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\" (UID: \"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\") " Nov 24 23:09:10 crc kubenswrapper[4915]: I1124 23:09:10.965560 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9wlj\" (UniqueName: \"kubernetes.io/projected/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-kube-api-access-f9wlj\") pod 
\"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\" (UID: \"206301ad-7a6e-4d3c-9bd8-dbf7bad623c8\") " Nov 24 23:09:10 crc kubenswrapper[4915]: I1124 23:09:10.966519 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-utilities" (OuterVolumeSpecName: "utilities") pod "206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" (UID: "206301ad-7a6e-4d3c-9bd8-dbf7bad623c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:09:10 crc kubenswrapper[4915]: I1124 23:09:10.971942 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-kube-api-access-f9wlj" (OuterVolumeSpecName: "kube-api-access-f9wlj") pod "206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" (UID: "206301ad-7a6e-4d3c-9bd8-dbf7bad623c8"). InnerVolumeSpecName "kube-api-access-f9wlj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:09:11 crc kubenswrapper[4915]: I1124 23:09:11.020473 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" (UID: "206301ad-7a6e-4d3c-9bd8-dbf7bad623c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:09:11 crc kubenswrapper[4915]: I1124 23:09:11.068552 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:09:11 crc kubenswrapper[4915]: I1124 23:09:11.068608 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9wlj\" (UniqueName: \"kubernetes.io/projected/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-kube-api-access-f9wlj\") on node \"crc\" DevicePath \"\"" Nov 24 23:09:11 crc kubenswrapper[4915]: I1124 23:09:11.068629 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:09:11 crc kubenswrapper[4915]: I1124 23:09:11.864571 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-62sgq" Nov 24 23:09:11 crc kubenswrapper[4915]: I1124 23:09:11.922263 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-62sgq"] Nov 24 23:09:11 crc kubenswrapper[4915]: I1124 23:09:11.937909 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-62sgq"] Nov 24 23:09:12 crc kubenswrapper[4915]: I1124 23:09:12.455089 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" path="/var/lib/kubelet/pods/206301ad-7a6e-4d3c-9bd8-dbf7bad623c8/volumes" Nov 24 23:09:12 crc kubenswrapper[4915]: I1124 23:09:12.807315 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:12 crc kubenswrapper[4915]: I1124 23:09:12.807438 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:13 crc kubenswrapper[4915]: I1124 23:09:13.894667 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2sd4d" podUID="b32f5580-67dc-442c-a5f6-1273a69f4a66" containerName="registry-server" probeResult="failure" output=< Nov 24 23:09:13 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 23:09:13 crc kubenswrapper[4915]: > Nov 24 23:09:16 crc kubenswrapper[4915]: I1124 23:09:16.428063 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:09:16 crc kubenswrapper[4915]: E1124 23:09:16.429448 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:09:22 crc kubenswrapper[4915]: I1124 23:09:22.886619 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:22 crc kubenswrapper[4915]: I1124 23:09:22.982922 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:23 crc kubenswrapper[4915]: I1124 23:09:23.155570 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2sd4d"] Nov 24 23:09:24 crc kubenswrapper[4915]: I1124 23:09:24.035389 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2sd4d" podUID="b32f5580-67dc-442c-a5f6-1273a69f4a66" containerName="registry-server" 
containerID="cri-o://04a1d4b7dad6c25ec25bc8420f459e96bccdc2f32913d37610ab8a10ddbc05eb" gracePeriod=2 Nov 24 23:09:24 crc kubenswrapper[4915]: I1124 23:09:24.600103 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:24 crc kubenswrapper[4915]: I1124 23:09:24.659304 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmwxw\" (UniqueName: \"kubernetes.io/projected/b32f5580-67dc-442c-a5f6-1273a69f4a66-kube-api-access-cmwxw\") pod \"b32f5580-67dc-442c-a5f6-1273a69f4a66\" (UID: \"b32f5580-67dc-442c-a5f6-1273a69f4a66\") " Nov 24 23:09:24 crc kubenswrapper[4915]: I1124 23:09:24.659387 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b32f5580-67dc-442c-a5f6-1273a69f4a66-utilities\") pod \"b32f5580-67dc-442c-a5f6-1273a69f4a66\" (UID: \"b32f5580-67dc-442c-a5f6-1273a69f4a66\") " Nov 24 23:09:24 crc kubenswrapper[4915]: I1124 23:09:24.659792 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b32f5580-67dc-442c-a5f6-1273a69f4a66-catalog-content\") pod \"b32f5580-67dc-442c-a5f6-1273a69f4a66\" (UID: \"b32f5580-67dc-442c-a5f6-1273a69f4a66\") " Nov 24 23:09:24 crc kubenswrapper[4915]: I1124 23:09:24.660815 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b32f5580-67dc-442c-a5f6-1273a69f4a66-utilities" (OuterVolumeSpecName: "utilities") pod "b32f5580-67dc-442c-a5f6-1273a69f4a66" (UID: "b32f5580-67dc-442c-a5f6-1273a69f4a66"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:09:24 crc kubenswrapper[4915]: I1124 23:09:24.665604 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b32f5580-67dc-442c-a5f6-1273a69f4a66-kube-api-access-cmwxw" (OuterVolumeSpecName: "kube-api-access-cmwxw") pod "b32f5580-67dc-442c-a5f6-1273a69f4a66" (UID: "b32f5580-67dc-442c-a5f6-1273a69f4a66"). InnerVolumeSpecName "kube-api-access-cmwxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:09:24 crc kubenswrapper[4915]: I1124 23:09:24.765486 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmwxw\" (UniqueName: \"kubernetes.io/projected/b32f5580-67dc-442c-a5f6-1273a69f4a66-kube-api-access-cmwxw\") on node \"crc\" DevicePath \"\"" Nov 24 23:09:24 crc kubenswrapper[4915]: I1124 23:09:24.765536 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b32f5580-67dc-442c-a5f6-1273a69f4a66-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:09:24 crc kubenswrapper[4915]: I1124 23:09:24.771506 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b32f5580-67dc-442c-a5f6-1273a69f4a66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b32f5580-67dc-442c-a5f6-1273a69f4a66" (UID: "b32f5580-67dc-442c-a5f6-1273a69f4a66"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:09:24 crc kubenswrapper[4915]: I1124 23:09:24.867759 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b32f5580-67dc-442c-a5f6-1273a69f4a66-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.051879 4915 generic.go:334] "Generic (PLEG): container finished" podID="b32f5580-67dc-442c-a5f6-1273a69f4a66" containerID="04a1d4b7dad6c25ec25bc8420f459e96bccdc2f32913d37610ab8a10ddbc05eb" exitCode=0 Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.051934 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd4d" event={"ID":"b32f5580-67dc-442c-a5f6-1273a69f4a66","Type":"ContainerDied","Data":"04a1d4b7dad6c25ec25bc8420f459e96bccdc2f32913d37610ab8a10ddbc05eb"} Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.051974 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2sd4d" event={"ID":"b32f5580-67dc-442c-a5f6-1273a69f4a66","Type":"ContainerDied","Data":"52d878bd454bf5145685edd895e8e61bcf1af60d225071a228f42faeae5a18a3"} Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.051982 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2sd4d" Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.051996 4915 scope.go:117] "RemoveContainer" containerID="04a1d4b7dad6c25ec25bc8420f459e96bccdc2f32913d37610ab8a10ddbc05eb" Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.089176 4915 scope.go:117] "RemoveContainer" containerID="5ee172caa18ee1fc57b74dacaf4eece3194b8a58ef10210c75744c4bf5e73469" Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.112505 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2sd4d"] Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.116363 4915 scope.go:117] "RemoveContainer" containerID="977c2ad14480cca453aad3345b28f741c9aa286fd52747485358b32bbc282c66" Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.130523 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2sd4d"] Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.183378 4915 scope.go:117] "RemoveContainer" containerID="04a1d4b7dad6c25ec25bc8420f459e96bccdc2f32913d37610ab8a10ddbc05eb" Nov 24 23:09:25 crc kubenswrapper[4915]: E1124 23:09:25.183901 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04a1d4b7dad6c25ec25bc8420f459e96bccdc2f32913d37610ab8a10ddbc05eb\": container with ID starting with 04a1d4b7dad6c25ec25bc8420f459e96bccdc2f32913d37610ab8a10ddbc05eb not found: ID does not exist" containerID="04a1d4b7dad6c25ec25bc8420f459e96bccdc2f32913d37610ab8a10ddbc05eb" Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.183956 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04a1d4b7dad6c25ec25bc8420f459e96bccdc2f32913d37610ab8a10ddbc05eb"} err="failed to get container status \"04a1d4b7dad6c25ec25bc8420f459e96bccdc2f32913d37610ab8a10ddbc05eb\": rpc error: code = NotFound desc = could not find container 
\"04a1d4b7dad6c25ec25bc8420f459e96bccdc2f32913d37610ab8a10ddbc05eb\": container with ID starting with 04a1d4b7dad6c25ec25bc8420f459e96bccdc2f32913d37610ab8a10ddbc05eb not found: ID does not exist" Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.184001 4915 scope.go:117] "RemoveContainer" containerID="5ee172caa18ee1fc57b74dacaf4eece3194b8a58ef10210c75744c4bf5e73469" Nov 24 23:09:25 crc kubenswrapper[4915]: E1124 23:09:25.184359 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee172caa18ee1fc57b74dacaf4eece3194b8a58ef10210c75744c4bf5e73469\": container with ID starting with 5ee172caa18ee1fc57b74dacaf4eece3194b8a58ef10210c75744c4bf5e73469 not found: ID does not exist" containerID="5ee172caa18ee1fc57b74dacaf4eece3194b8a58ef10210c75744c4bf5e73469" Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.184384 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee172caa18ee1fc57b74dacaf4eece3194b8a58ef10210c75744c4bf5e73469"} err="failed to get container status \"5ee172caa18ee1fc57b74dacaf4eece3194b8a58ef10210c75744c4bf5e73469\": rpc error: code = NotFound desc = could not find container \"5ee172caa18ee1fc57b74dacaf4eece3194b8a58ef10210c75744c4bf5e73469\": container with ID starting with 5ee172caa18ee1fc57b74dacaf4eece3194b8a58ef10210c75744c4bf5e73469 not found: ID does not exist" Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.184402 4915 scope.go:117] "RemoveContainer" containerID="977c2ad14480cca453aad3345b28f741c9aa286fd52747485358b32bbc282c66" Nov 24 23:09:25 crc kubenswrapper[4915]: E1124 23:09:25.185047 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"977c2ad14480cca453aad3345b28f741c9aa286fd52747485358b32bbc282c66\": container with ID starting with 977c2ad14480cca453aad3345b28f741c9aa286fd52747485358b32bbc282c66 not found: ID does not exist" 
containerID="977c2ad14480cca453aad3345b28f741c9aa286fd52747485358b32bbc282c66" Nov 24 23:09:25 crc kubenswrapper[4915]: I1124 23:09:25.185089 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"977c2ad14480cca453aad3345b28f741c9aa286fd52747485358b32bbc282c66"} err="failed to get container status \"977c2ad14480cca453aad3345b28f741c9aa286fd52747485358b32bbc282c66\": rpc error: code = NotFound desc = could not find container \"977c2ad14480cca453aad3345b28f741c9aa286fd52747485358b32bbc282c66\": container with ID starting with 977c2ad14480cca453aad3345b28f741c9aa286fd52747485358b32bbc282c66 not found: ID does not exist" Nov 24 23:09:26 crc kubenswrapper[4915]: I1124 23:09:26.454014 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b32f5580-67dc-442c-a5f6-1273a69f4a66" path="/var/lib/kubelet/pods/b32f5580-67dc-442c-a5f6-1273a69f4a66/volumes" Nov 24 23:09:30 crc kubenswrapper[4915]: I1124 23:09:30.427697 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:09:30 crc kubenswrapper[4915]: E1124 23:09:30.428805 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:09:45 crc kubenswrapper[4915]: I1124 23:09:45.428091 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:09:45 crc kubenswrapper[4915]: E1124 23:09:45.429362 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:10:00 crc kubenswrapper[4915]: I1124 23:10:00.427223 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:10:00 crc kubenswrapper[4915]: E1124 23:10:00.429001 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:10:12 crc kubenswrapper[4915]: I1124 23:10:12.440788 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:10:12 crc kubenswrapper[4915]: E1124 23:10:12.441991 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:10:23 crc kubenswrapper[4915]: I1124 23:10:23.427905 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:10:23 crc kubenswrapper[4915]: E1124 23:10:23.429218 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:10:38 crc kubenswrapper[4915]: I1124 23:10:38.427408 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:10:38 crc kubenswrapper[4915]: E1124 23:10:38.428413 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:10:51 crc kubenswrapper[4915]: I1124 23:10:51.428106 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:10:51 crc kubenswrapper[4915]: E1124 23:10:51.429149 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:10:59 crc kubenswrapper[4915]: I1124 23:10:59.994597 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-bnv6m"] Nov 24 23:10:59 crc kubenswrapper[4915]: E1124 23:10:59.995834 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" containerName="registry-server" Nov 24 
23:10:59 crc kubenswrapper[4915]: I1124 23:10:59.995855 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" containerName="registry-server" Nov 24 23:10:59 crc kubenswrapper[4915]: E1124 23:10:59.995895 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b32f5580-67dc-442c-a5f6-1273a69f4a66" containerName="extract-utilities" Nov 24 23:10:59 crc kubenswrapper[4915]: I1124 23:10:59.995906 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b32f5580-67dc-442c-a5f6-1273a69f4a66" containerName="extract-utilities" Nov 24 23:10:59 crc kubenswrapper[4915]: E1124 23:10:59.995924 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" containerName="extract-utilities" Nov 24 23:10:59 crc kubenswrapper[4915]: I1124 23:10:59.995933 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" containerName="extract-utilities" Nov 24 23:10:59 crc kubenswrapper[4915]: E1124 23:10:59.995950 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" containerName="extract-content" Nov 24 23:10:59 crc kubenswrapper[4915]: I1124 23:10:59.995958 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" containerName="extract-content" Nov 24 23:10:59 crc kubenswrapper[4915]: E1124 23:10:59.995974 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b32f5580-67dc-442c-a5f6-1273a69f4a66" containerName="extract-content" Nov 24 23:10:59 crc kubenswrapper[4915]: I1124 23:10:59.995984 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b32f5580-67dc-442c-a5f6-1273a69f4a66" containerName="extract-content" Nov 24 23:10:59 crc kubenswrapper[4915]: E1124 23:10:59.996006 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b32f5580-67dc-442c-a5f6-1273a69f4a66" containerName="registry-server" Nov 24 
23:10:59 crc kubenswrapper[4915]: I1124 23:10:59.996014 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="b32f5580-67dc-442c-a5f6-1273a69f4a66" containerName="registry-server" Nov 24 23:10:59 crc kubenswrapper[4915]: I1124 23:10:59.996342 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="b32f5580-67dc-442c-a5f6-1273a69f4a66" containerName="registry-server" Nov 24 23:10:59 crc kubenswrapper[4915]: I1124 23:10:59.996367 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="206301ad-7a6e-4d3c-9bd8-dbf7bad623c8" containerName="registry-server" Nov 24 23:10:59 crc kubenswrapper[4915]: I1124 23:10:59.997495 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bnv6m" Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.011502 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-bnv6m"] Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.116928 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-dfc4k"] Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.118354 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-dfc4k" Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.123960 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.136061 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-dfc4k"] Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.138345 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-combined-ca-bundle\") pod \"heat-db-sync-bnv6m\" (UID: \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\") " pod="openstack/heat-db-sync-bnv6m" Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.138395 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-config-data\") pod \"heat-db-sync-bnv6m\" (UID: \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\") " pod="openstack/heat-db-sync-bnv6m" Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.138481 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6krm\" (UniqueName: \"kubernetes.io/projected/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-kube-api-access-c6krm\") pod \"heat-db-sync-bnv6m\" (UID: \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\") " pod="openstack/heat-db-sync-bnv6m" Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.240596 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-combined-ca-bundle\") pod \"heat-db-sync-bnv6m\" (UID: \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\") " pod="openstack/heat-db-sync-bnv6m" Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.241110 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-config-data\") pod \"heat-db-sync-bnv6m\" (UID: \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\") " pod="openstack/heat-db-sync-bnv6m"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.241183 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgmpl\" (UniqueName: \"kubernetes.io/projected/4b249c5c-fb49-4f90-9821-2c4c7b37d448-kube-api-access-bgmpl\") pod \"aodh-db-sync-dfc4k\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " pod="openstack/aodh-db-sync-dfc4k"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.241230 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-scripts\") pod \"aodh-db-sync-dfc4k\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " pod="openstack/aodh-db-sync-dfc4k"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.241319 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6krm\" (UniqueName: \"kubernetes.io/projected/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-kube-api-access-c6krm\") pod \"heat-db-sync-bnv6m\" (UID: \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\") " pod="openstack/heat-db-sync-bnv6m"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.241389 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-config-data\") pod \"aodh-db-sync-dfc4k\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " pod="openstack/aodh-db-sync-dfc4k"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.241429 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-combined-ca-bundle\") pod \"aodh-db-sync-dfc4k\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " pod="openstack/aodh-db-sync-dfc4k"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.249710 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-config-data\") pod \"heat-db-sync-bnv6m\" (UID: \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\") " pod="openstack/heat-db-sync-bnv6m"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.252254 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-combined-ca-bundle\") pod \"heat-db-sync-bnv6m\" (UID: \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\") " pod="openstack/heat-db-sync-bnv6m"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.278408 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6krm\" (UniqueName: \"kubernetes.io/projected/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-kube-api-access-c6krm\") pod \"heat-db-sync-bnv6m\" (UID: \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\") " pod="openstack/heat-db-sync-bnv6m"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.331306 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bnv6m"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.343529 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgmpl\" (UniqueName: \"kubernetes.io/projected/4b249c5c-fb49-4f90-9821-2c4c7b37d448-kube-api-access-bgmpl\") pod \"aodh-db-sync-dfc4k\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " pod="openstack/aodh-db-sync-dfc4k"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.343626 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-scripts\") pod \"aodh-db-sync-dfc4k\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " pod="openstack/aodh-db-sync-dfc4k"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.343941 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-config-data\") pod \"aodh-db-sync-dfc4k\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " pod="openstack/aodh-db-sync-dfc4k"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.343992 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-combined-ca-bundle\") pod \"aodh-db-sync-dfc4k\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " pod="openstack/aodh-db-sync-dfc4k"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.349290 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-config-data\") pod \"aodh-db-sync-dfc4k\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " pod="openstack/aodh-db-sync-dfc4k"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.349530 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-scripts\") pod \"aodh-db-sync-dfc4k\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " pod="openstack/aodh-db-sync-dfc4k"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.350409 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-combined-ca-bundle\") pod \"aodh-db-sync-dfc4k\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " pod="openstack/aodh-db-sync-dfc4k"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.362329 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgmpl\" (UniqueName: \"kubernetes.io/projected/4b249c5c-fb49-4f90-9821-2c4c7b37d448-kube-api-access-bgmpl\") pod \"aodh-db-sync-dfc4k\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " pod="openstack/aodh-db-sync-dfc4k"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.438429 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-dfc4k"
Nov 24 23:11:00 crc kubenswrapper[4915]: I1124 23:11:00.908493 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-bnv6m"]
Nov 24 23:11:01 crc kubenswrapper[4915]: I1124 23:11:01.029542 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-dfc4k"]
Nov 24 23:11:01 crc kubenswrapper[4915]: I1124 23:11:01.461913 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bnv6m" event={"ID":"3f948bc6-46ff-4a75-98c5-beafdbd54bcc","Type":"ContainerStarted","Data":"c35a08293bc22a6e8aada072fb7f741dc8fa616e56cbd4d31f6935859e4e57da"}
Nov 24 23:11:01 crc kubenswrapper[4915]: I1124 23:11:01.464935 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dfc4k" event={"ID":"4b249c5c-fb49-4f90-9821-2c4c7b37d448","Type":"ContainerStarted","Data":"3de423219df149a91a26af04d2fa565f30c1abf89f5fb9a3cbc0b5e4626c25c9"}
Nov 24 23:11:02 crc kubenswrapper[4915]: I1124 23:11:02.101472 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 23:11:02 crc kubenswrapper[4915]: I1124 23:11:02.102356 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="proxy-httpd" containerID="cri-o://9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98" gracePeriod=30
Nov 24 23:11:02 crc kubenswrapper[4915]: I1124 23:11:02.102406 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="ceilometer-notification-agent" containerID="cri-o://82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2" gracePeriod=30
Nov 24 23:11:02 crc kubenswrapper[4915]: I1124 23:11:02.102356 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="sg-core" containerID="cri-o://61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db" gracePeriod=30
Nov 24 23:11:02 crc kubenswrapper[4915]: I1124 23:11:02.102614 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="ceilometer-central-agent" containerID="cri-o://bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12" gracePeriod=30
Nov 24 23:11:02 crc kubenswrapper[4915]: I1124 23:11:02.487350 4915 generic.go:334] "Generic (PLEG): container finished" podID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerID="61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db" exitCode=2
Nov 24 23:11:02 crc kubenswrapper[4915]: I1124 23:11:02.487391 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75360af7-e79c-4a47-8a96-a78d7bc8804e","Type":"ContainerDied","Data":"61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db"}
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.378952 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.512949 4915 generic.go:334] "Generic (PLEG): container finished" podID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerID="9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98" exitCode=0
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.513203 4915 generic.go:334] "Generic (PLEG): container finished" podID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerID="82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2" exitCode=0
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.513211 4915 generic.go:334] "Generic (PLEG): container finished" podID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerID="bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12" exitCode=0
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.513233 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75360af7-e79c-4a47-8a96-a78d7bc8804e","Type":"ContainerDied","Data":"9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98"}
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.513260 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75360af7-e79c-4a47-8a96-a78d7bc8804e","Type":"ContainerDied","Data":"82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2"}
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.513270 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75360af7-e79c-4a47-8a96-a78d7bc8804e","Type":"ContainerDied","Data":"bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12"}
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.513279 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75360af7-e79c-4a47-8a96-a78d7bc8804e","Type":"ContainerDied","Data":"484a867d9866479fe14ed331dae27f49c5c71eeb7caaac8da8bf49f2c98255f0"}
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.513276 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.513294 4915 scope.go:117] "RemoveContainer" containerID="9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98"
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.542690 4915 scope.go:117] "RemoveContainer" containerID="61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db"
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.543405 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75360af7-e79c-4a47-8a96-a78d7bc8804e-run-httpd\") pod \"75360af7-e79c-4a47-8a96-a78d7bc8804e\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") "
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.543463 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-scripts\") pod \"75360af7-e79c-4a47-8a96-a78d7bc8804e\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") "
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.543571 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t55qn\" (UniqueName: \"kubernetes.io/projected/75360af7-e79c-4a47-8a96-a78d7bc8804e-kube-api-access-t55qn\") pod \"75360af7-e79c-4a47-8a96-a78d7bc8804e\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") "
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.543697 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75360af7-e79c-4a47-8a96-a78d7bc8804e-log-httpd\") pod \"75360af7-e79c-4a47-8a96-a78d7bc8804e\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") "
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.543765 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-ceilometer-tls-certs\") pod \"75360af7-e79c-4a47-8a96-a78d7bc8804e\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") "
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.543863 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-combined-ca-bundle\") pod \"75360af7-e79c-4a47-8a96-a78d7bc8804e\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") "
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.543863 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75360af7-e79c-4a47-8a96-a78d7bc8804e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "75360af7-e79c-4a47-8a96-a78d7bc8804e" (UID: "75360af7-e79c-4a47-8a96-a78d7bc8804e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.543937 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-config-data\") pod \"75360af7-e79c-4a47-8a96-a78d7bc8804e\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") "
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.543966 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-sg-core-conf-yaml\") pod \"75360af7-e79c-4a47-8a96-a78d7bc8804e\" (UID: \"75360af7-e79c-4a47-8a96-a78d7bc8804e\") "
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.544596 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75360af7-e79c-4a47-8a96-a78d7bc8804e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "75360af7-e79c-4a47-8a96-a78d7bc8804e" (UID: "75360af7-e79c-4a47-8a96-a78d7bc8804e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.545370 4915 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75360af7-e79c-4a47-8a96-a78d7bc8804e-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.545391 4915 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75360af7-e79c-4a47-8a96-a78d7bc8804e-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.556912 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75360af7-e79c-4a47-8a96-a78d7bc8804e-kube-api-access-t55qn" (OuterVolumeSpecName: "kube-api-access-t55qn") pod "75360af7-e79c-4a47-8a96-a78d7bc8804e" (UID: "75360af7-e79c-4a47-8a96-a78d7bc8804e"). InnerVolumeSpecName "kube-api-access-t55qn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.557200 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-scripts" (OuterVolumeSpecName: "scripts") pod "75360af7-e79c-4a47-8a96-a78d7bc8804e" (UID: "75360af7-e79c-4a47-8a96-a78d7bc8804e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.647525 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-scripts\") on node \"crc\" DevicePath \"\""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.647551 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t55qn\" (UniqueName: \"kubernetes.io/projected/75360af7-e79c-4a47-8a96-a78d7bc8804e-kube-api-access-t55qn\") on node \"crc\" DevicePath \"\""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.650023 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "75360af7-e79c-4a47-8a96-a78d7bc8804e" (UID: "75360af7-e79c-4a47-8a96-a78d7bc8804e"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.676560 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "75360af7-e79c-4a47-8a96-a78d7bc8804e" (UID: "75360af7-e79c-4a47-8a96-a78d7bc8804e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.749844 4915 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.749871 4915 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.840811 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-config-data" (OuterVolumeSpecName: "config-data") pod "75360af7-e79c-4a47-8a96-a78d7bc8804e" (UID: "75360af7-e79c-4a47-8a96-a78d7bc8804e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.854532 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-config-data\") on node \"crc\" DevicePath \"\""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.891127 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75360af7-e79c-4a47-8a96-a78d7bc8804e" (UID: "75360af7-e79c-4a47-8a96-a78d7bc8804e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.916571 4915 scope.go:117] "RemoveContainer" containerID="82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2"
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.962042 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75360af7-e79c-4a47-8a96-a78d7bc8804e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 24 23:11:03 crc kubenswrapper[4915]: I1124 23:11:03.987035 4915 scope.go:117] "RemoveContainer" containerID="bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.020686 4915 scope.go:117] "RemoveContainer" containerID="9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98"
Nov 24 23:11:04 crc kubenswrapper[4915]: E1124 23:11:04.021223 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98\": container with ID starting with 9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98 not found: ID does not exist" containerID="9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.021262 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98"} err="failed to get container status \"9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98\": rpc error: code = NotFound desc = could not find container \"9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98\": container with ID starting with 9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98 not found: ID does not exist"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.021288 4915 scope.go:117] "RemoveContainer" containerID="61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db"
Nov 24 23:11:04 crc kubenswrapper[4915]: E1124 23:11:04.022864 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db\": container with ID starting with 61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db not found: ID does not exist" containerID="61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.022894 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db"} err="failed to get container status \"61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db\": rpc error: code = NotFound desc = could not find container \"61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db\": container with ID starting with 61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db not found: ID does not exist"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.022914 4915 scope.go:117] "RemoveContainer" containerID="82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2"
Nov 24 23:11:04 crc kubenswrapper[4915]: E1124 23:11:04.023119 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2\": container with ID starting with 82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2 not found: ID does not exist" containerID="82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.023146 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2"} err="failed to get container status \"82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2\": rpc error: code = NotFound desc = could not find container \"82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2\": container with ID starting with 82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2 not found: ID does not exist"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.023161 4915 scope.go:117] "RemoveContainer" containerID="bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12"
Nov 24 23:11:04 crc kubenswrapper[4915]: E1124 23:11:04.023514 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12\": container with ID starting with bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12 not found: ID does not exist" containerID="bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.023535 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12"} err="failed to get container status \"bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12\": rpc error: code = NotFound desc = could not find container \"bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12\": container with ID starting with bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12 not found: ID does not exist"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.023548 4915 scope.go:117] "RemoveContainer" containerID="9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.023757 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98"} err="failed to get container status \"9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98\": rpc error: code = NotFound desc = could not find container \"9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98\": container with ID starting with 9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98 not found: ID does not exist"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.023791 4915 scope.go:117] "RemoveContainer" containerID="61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.024619 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db"} err="failed to get container status \"61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db\": rpc error: code = NotFound desc = could not find container \"61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db\": container with ID starting with 61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db not found: ID does not exist"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.024643 4915 scope.go:117] "RemoveContainer" containerID="82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.024872 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2"} err="failed to get container status \"82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2\": rpc error: code = NotFound desc = could not find container \"82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2\": container with ID starting with 82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2 not found: ID does not exist"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.024893 4915 scope.go:117] "RemoveContainer" containerID="bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.025256 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12"} err="failed to get container status \"bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12\": rpc error: code = NotFound desc = could not find container \"bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12\": container with ID starting with bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12 not found: ID does not exist"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.025304 4915 scope.go:117] "RemoveContainer" containerID="9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.026585 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98"} err="failed to get container status \"9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98\": rpc error: code = NotFound desc = could not find container \"9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98\": container with ID starting with 9dabca5d88c714043ed268f9b66254f3b245a2fbb06b21d6b3fb55dbc1923a98 not found: ID does not exist"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.026606 4915 scope.go:117] "RemoveContainer" containerID="61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.026975 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db"} err="failed to get container status \"61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db\": rpc error: code = NotFound desc = could not find container \"61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db\": container with ID starting with 61430822e8f31012173bc908c827a32a76a01d10caee1907bb2b9bc10d0bf8db not found: ID does not exist"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.026995 4915 scope.go:117] "RemoveContainer" containerID="82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.027203 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2"} err="failed to get container status \"82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2\": rpc error: code = NotFound desc = could not find container \"82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2\": container with ID starting with 82dcbee9b7e1fb1a9a4d25e4ee19835d5ffbc3c819d239e724e07701099583b2 not found: ID does not exist"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.027220 4915 scope.go:117] "RemoveContainer" containerID="bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.029314 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12"} err="failed to get container status \"bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12\": rpc error: code = NotFound desc = could not find container \"bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12\": container with ID starting with bf9a47410fd5a8eb428cbca71d045f2646449c29197c40e7e4d1927da0e5bf12 not found: ID does not exist"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.159142 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.183812 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.198149 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 24 23:11:04 crc kubenswrapper[4915]: E1124 23:11:04.198597 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="ceilometer-central-agent"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.198610 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="ceilometer-central-agent"
Nov 24 23:11:04 crc kubenswrapper[4915]: E1124 23:11:04.198621 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="sg-core"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.198627 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="sg-core"
Nov 24 23:11:04 crc kubenswrapper[4915]: E1124 23:11:04.198656 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="proxy-httpd"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.198664 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="proxy-httpd"
Nov 24 23:11:04 crc kubenswrapper[4915]: E1124 23:11:04.198692 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="ceilometer-notification-agent"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.198697 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="ceilometer-notification-agent"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.199093 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="proxy-httpd"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.199115 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="ceilometer-central-agent"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.199126 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="ceilometer-notification-agent"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.199139 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" containerName="sg-core"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.201246 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.206541 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.206733 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.207090 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.212234 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.268475 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.268548 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-config-data\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.268652 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh75m\" (UniqueName: \"kubernetes.io/projected/e69116cf-d50e-44af-899f-2e11d16e45d1-kube-api-access-lh75m\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.268700 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.268847 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e69116cf-d50e-44af-899f-2e11d16e45d1-log-httpd\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.269061 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e69116cf-d50e-44af-899f-2e11d16e45d1-run-httpd\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0"
Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.269083 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.269131 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-scripts\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.370825 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh75m\" (UniqueName: \"kubernetes.io/projected/e69116cf-d50e-44af-899f-2e11d16e45d1-kube-api-access-lh75m\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.370890 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.370942 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e69116cf-d50e-44af-899f-2e11d16e45d1-log-httpd\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.370985 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e69116cf-d50e-44af-899f-2e11d16e45d1-run-httpd\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " 
pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.371005 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.371028 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-scripts\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.371085 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.371117 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-config-data\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.372396 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e69116cf-d50e-44af-899f-2e11d16e45d1-run-httpd\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.373432 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e69116cf-d50e-44af-899f-2e11d16e45d1-log-httpd\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.375692 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-scripts\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.377286 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.378995 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.379066 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-config-data\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.381052 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e69116cf-d50e-44af-899f-2e11d16e45d1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.389184 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh75m\" (UniqueName: \"kubernetes.io/projected/e69116cf-d50e-44af-899f-2e11d16e45d1-kube-api-access-lh75m\") pod \"ceilometer-0\" (UID: \"e69116cf-d50e-44af-899f-2e11d16e45d1\") " pod="openstack/ceilometer-0" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.427033 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:11:04 crc kubenswrapper[4915]: E1124 23:11:04.427346 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.445419 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75360af7-e79c-4a47-8a96-a78d7bc8804e" path="/var/lib/kubelet/pods/75360af7-e79c-4a47-8a96-a78d7bc8804e/volumes" Nov 24 23:11:04 crc kubenswrapper[4915]: I1124 23:11:04.537565 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 24 23:11:05 crc kubenswrapper[4915]: W1124 23:11:05.079289 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode69116cf_d50e_44af_899f_2e11d16e45d1.slice/crio-584d3a0ae50dacb10f7f48391a66317473aca7774249d7e72f838058d24db985 WatchSource:0}: Error finding container 584d3a0ae50dacb10f7f48391a66317473aca7774249d7e72f838058d24db985: Status 404 returned error can't find the container with id 584d3a0ae50dacb10f7f48391a66317473aca7774249d7e72f838058d24db985 Nov 24 23:11:05 crc kubenswrapper[4915]: I1124 23:11:05.081285 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 24 23:11:05 crc kubenswrapper[4915]: I1124 23:11:05.550908 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e69116cf-d50e-44af-899f-2e11d16e45d1","Type":"ContainerStarted","Data":"584d3a0ae50dacb10f7f48391a66317473aca7774249d7e72f838058d24db985"} Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.368798 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.370674 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.372930 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rh5x7" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.373476 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.373975 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.374295 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.390305 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.429205 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5cde4f81-df73-4990-885c-690d843e90bb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.429425 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.429487 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: 
\"kubernetes.io/empty-dir/5cde4f81-df73-4990-885c-690d843e90bb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.429607 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.429671 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.429704 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhlkd\" (UniqueName: \"kubernetes.io/projected/5cde4f81-df73-4990-885c-690d843e90bb-kube-api-access-fhlkd\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.429757 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.429943 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/5cde4f81-df73-4990-885c-690d843e90bb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.430260 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cde4f81-df73-4990-885c-690d843e90bb-config-data\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.533992 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5cde4f81-df73-4990-885c-690d843e90bb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.534036 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cde4f81-df73-4990-885c-690d843e90bb-config-data\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.534193 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5cde4f81-df73-4990-885c-690d843e90bb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.534270 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.534361 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5cde4f81-df73-4990-885c-690d843e90bb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.534494 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5cde4f81-df73-4990-885c-690d843e90bb-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.534514 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.534603 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.534663 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhlkd\" (UniqueName: 
\"kubernetes.io/projected/5cde4f81-df73-4990-885c-690d843e90bb-kube-api-access-fhlkd\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.534704 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.535440 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5cde4f81-df73-4990-885c-690d843e90bb-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.535493 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5cde4f81-df73-4990-885c-690d843e90bb-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.536288 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cde4f81-df73-4990-885c-690d843e90bb-config-data\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.539511 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: 
\"5cde4f81-df73-4990-885c-690d843e90bb\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.540467 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.540823 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.554221 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhlkd\" (UniqueName: \"kubernetes.io/projected/5cde4f81-df73-4990-885c-690d843e90bb-kube-api-access-fhlkd\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.557550 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.580191 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " pod="openstack/tempest-tests-tempest" Nov 24 23:11:06 crc kubenswrapper[4915]: I1124 23:11:06.713540 4915 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 23:11:07 crc kubenswrapper[4915]: I1124 23:11:07.233442 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 24 23:11:07 crc kubenswrapper[4915]: W1124 23:11:07.233606 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cde4f81_df73_4990_885c_690d843e90bb.slice/crio-4eba24cee01cc9bf8ce52d5e195c628368fd31ad07024fad4e919c5511e7a548 WatchSource:0}: Error finding container 4eba24cee01cc9bf8ce52d5e195c628368fd31ad07024fad4e919c5511e7a548: Status 404 returned error can't find the container with id 4eba24cee01cc9bf8ce52d5e195c628368fd31ad07024fad4e919c5511e7a548 Nov 24 23:11:07 crc kubenswrapper[4915]: I1124 23:11:07.592225 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5cde4f81-df73-4990-885c-690d843e90bb","Type":"ContainerStarted","Data":"4eba24cee01cc9bf8ce52d5e195c628368fd31ad07024fad4e919c5511e7a548"} Nov 24 23:11:16 crc kubenswrapper[4915]: I1124 23:11:16.427974 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:11:16 crc kubenswrapper[4915]: E1124 23:11:16.428756 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:11:24 crc kubenswrapper[4915]: E1124 23:11:24.511819 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested" Nov 24 23:11:24 crc kubenswrapper[4915]: E1124 23:11:24.512512 4915 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested" Nov 24 23:11:24 crc kubenswrapper[4915]: E1124 23:11:24.516980 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:aodh-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:AodhPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:AodhPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:aodh-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bgmpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagat
ion:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42402,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod aodh-db-sync-dfc4k_openstack(4b249c5c-fb49-4f90-9821-2c4c7b37d448): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 23:11:24 crc kubenswrapper[4915]: E1124 23:11:24.518297 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/aodh-db-sync-dfc4k" podUID="4b249c5c-fb49-4f90-9821-2c4c7b37d448" Nov 24 23:11:24 crc kubenswrapper[4915]: E1124 23:11:24.800577 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested\\\"\"" pod="openstack/aodh-db-sync-dfc4k" podUID="4b249c5c-fb49-4f90-9821-2c4c7b37d448" Nov 24 23:11:27 crc kubenswrapper[4915]: I1124 23:11:27.427174 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:11:27 crc kubenswrapper[4915]: E1124 23:11:27.428073 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:11:38 crc kubenswrapper[4915]: I1124 23:11:38.427137 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:11:38 crc kubenswrapper[4915]: E1124 23:11:38.428014 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:11:52 crc kubenswrapper[4915]: I1124 23:11:52.447411 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:11:52 crc kubenswrapper[4915]: E1124 23:11:52.448955 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:11:53 crc kubenswrapper[4915]: E1124 23:11:53.078622 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 24 23:11:53 crc kubenswrapper[4915]: E1124 23:11:53.078954 4915 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhlkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProb
e:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(5cde4f81-df73-4990-885c-690d843e90bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 23:11:53 crc kubenswrapper[4915]: E1124 23:11:53.080238 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="5cde4f81-df73-4990-885c-690d843e90bb" Nov 24 23:11:53 crc kubenswrapper[4915]: E1124 23:11:53.190911 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="5cde4f81-df73-4990-885c-690d843e90bb" Nov 24 
23:11:53 crc kubenswrapper[4915]: E1124 23:11:53.493082 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Nov 24 23:11:53 crc kubenswrapper[4915]: E1124 23:11:53.493142 4915 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Nov 24 23:11:53 crc kubenswrapper[4915]: E1124 23:11:53.493888 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndchbh587hcfh68dh65bhc4h75h577h77h68ch667h5d4h5bch9h5d6h55bh76h5c8h547h5d4h687h654hf5h58h9ch78h659h5c6h66fh676h5d4q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-acce
ss-lh75m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(e69116cf-d50e-44af-899f-2e11d16e45d1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 23:11:53 crc kubenswrapper[4915]: I1124 23:11:53.599850 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m7kkh"] Nov 24 23:11:53 crc kubenswrapper[4915]: I1124 23:11:53.603703 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:11:53 crc kubenswrapper[4915]: I1124 23:11:53.616935 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7kkh"] Nov 24 23:11:53 crc kubenswrapper[4915]: I1124 23:11:53.715544 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9e4d2e-d25a-42d8-a5cf-601939818b60-catalog-content\") pod \"redhat-marketplace-m7kkh\" (UID: \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\") " pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:11:53 crc kubenswrapper[4915]: I1124 23:11:53.715611 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmmfc\" (UniqueName: \"kubernetes.io/projected/5c9e4d2e-d25a-42d8-a5cf-601939818b60-kube-api-access-pmmfc\") pod \"redhat-marketplace-m7kkh\" (UID: \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\") " pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:11:53 crc kubenswrapper[4915]: I1124 23:11:53.715684 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9e4d2e-d25a-42d8-a5cf-601939818b60-utilities\") pod \"redhat-marketplace-m7kkh\" (UID: \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\") " pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:11:53 crc kubenswrapper[4915]: I1124 23:11:53.818523 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9e4d2e-d25a-42d8-a5cf-601939818b60-catalog-content\") pod \"redhat-marketplace-m7kkh\" (UID: \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\") " pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:11:53 crc kubenswrapper[4915]: I1124 23:11:53.818578 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pmmfc\" (UniqueName: \"kubernetes.io/projected/5c9e4d2e-d25a-42d8-a5cf-601939818b60-kube-api-access-pmmfc\") pod \"redhat-marketplace-m7kkh\" (UID: \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\") " pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:11:53 crc kubenswrapper[4915]: I1124 23:11:53.818610 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9e4d2e-d25a-42d8-a5cf-601939818b60-utilities\") pod \"redhat-marketplace-m7kkh\" (UID: \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\") " pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:11:53 crc kubenswrapper[4915]: I1124 23:11:53.819190 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9e4d2e-d25a-42d8-a5cf-601939818b60-utilities\") pod \"redhat-marketplace-m7kkh\" (UID: \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\") " pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:11:53 crc kubenswrapper[4915]: I1124 23:11:53.819204 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9e4d2e-d25a-42d8-a5cf-601939818b60-catalog-content\") pod \"redhat-marketplace-m7kkh\" (UID: \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\") " pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:11:53 crc kubenswrapper[4915]: I1124 23:11:53.844071 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmmfc\" (UniqueName: \"kubernetes.io/projected/5c9e4d2e-d25a-42d8-a5cf-601939818b60-kube-api-access-pmmfc\") pod \"redhat-marketplace-m7kkh\" (UID: \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\") " pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:11:53 crc kubenswrapper[4915]: E1124 23:11:53.874997 4915 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 24 23:11:53 crc kubenswrapper[4915]: E1124 23:11:53.875047 4915 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Nov 24 23:11:53 crc kubenswrapper[4915]: E1124 23:11:53.875166 4915 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6krm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/t
ermination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-bnv6m_openstack(3f948bc6-46ff-4a75-98c5-beafdbd54bcc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 24 23:11:53 crc kubenswrapper[4915]: E1124 23:11:53.876345 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-bnv6m" podUID="3f948bc6-46ff-4a75-98c5-beafdbd54bcc" Nov 24 23:11:53 crc kubenswrapper[4915]: I1124 23:11:53.907677 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 24 23:11:53 crc kubenswrapper[4915]: I1124 23:11:53.944345 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:11:54 crc kubenswrapper[4915]: E1124 23:11:54.204401 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-bnv6m" podUID="3f948bc6-46ff-4a75-98c5-beafdbd54bcc" Nov 24 23:11:54 crc kubenswrapper[4915]: W1124 23:11:54.539206 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c9e4d2e_d25a_42d8_a5cf_601939818b60.slice/crio-b7167c128cf8cca89a2f29ae3c77d802e50846532781fa4bb0ba69af2959dd0f WatchSource:0}: Error finding container b7167c128cf8cca89a2f29ae3c77d802e50846532781fa4bb0ba69af2959dd0f: Status 404 returned error can't find the container with id b7167c128cf8cca89a2f29ae3c77d802e50846532781fa4bb0ba69af2959dd0f Nov 24 23:11:54 crc kubenswrapper[4915]: I1124 23:11:54.541743 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7kkh"] Nov 24 23:11:55 crc kubenswrapper[4915]: I1124 23:11:55.215133 4915 generic.go:334] "Generic (PLEG): container finished" podID="5c9e4d2e-d25a-42d8-a5cf-601939818b60" containerID="bb84ee0523f450a3c3b4b7c4f8bfb7f79055214cc4142ded70d7fd933e73b131" exitCode=0 Nov 24 23:11:55 crc kubenswrapper[4915]: I1124 23:11:55.215205 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7kkh" event={"ID":"5c9e4d2e-d25a-42d8-a5cf-601939818b60","Type":"ContainerDied","Data":"bb84ee0523f450a3c3b4b7c4f8bfb7f79055214cc4142ded70d7fd933e73b131"} Nov 24 23:11:55 crc kubenswrapper[4915]: I1124 23:11:55.215233 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7kkh" 
event={"ID":"5c9e4d2e-d25a-42d8-a5cf-601939818b60","Type":"ContainerStarted","Data":"b7167c128cf8cca89a2f29ae3c77d802e50846532781fa4bb0ba69af2959dd0f"} Nov 24 23:11:55 crc kubenswrapper[4915]: I1124 23:11:55.216986 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dfc4k" event={"ID":"4b249c5c-fb49-4f90-9821-2c4c7b37d448","Type":"ContainerStarted","Data":"3e2bc40d8141a053268a19df9e449dfc313ffe1e064df31601ea3f0e4980a19b"} Nov 24 23:11:55 crc kubenswrapper[4915]: I1124 23:11:55.220880 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e69116cf-d50e-44af-899f-2e11d16e45d1","Type":"ContainerStarted","Data":"24c2bc82f6f50bc63d0e3bced244f43df8e2dd7d4712e27b5ac6ebfe7ad0bd10"} Nov 24 23:11:55 crc kubenswrapper[4915]: I1124 23:11:55.266001 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-dfc4k" podStartSLOduration=2.391439123 podStartE2EDuration="55.265981685s" podCreationTimestamp="2025-11-24 23:11:00 +0000 UTC" firstStartedPulling="2025-11-24 23:11:01.030584241 +0000 UTC m=+6679.346836414" lastFinishedPulling="2025-11-24 23:11:53.905126783 +0000 UTC m=+6732.221378976" observedRunningTime="2025-11-24 23:11:55.256136249 +0000 UTC m=+6733.572388472" watchObservedRunningTime="2025-11-24 23:11:55.265981685 +0000 UTC m=+6733.582233858" Nov 24 23:11:56 crc kubenswrapper[4915]: I1124 23:11:56.237816 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e69116cf-d50e-44af-899f-2e11d16e45d1","Type":"ContainerStarted","Data":"8eda9cdadb944a35f6e278abeba4236108b16428742ae33eff79820611da94d0"} Nov 24 23:11:56 crc kubenswrapper[4915]: I1124 23:11:56.240316 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7kkh" event={"ID":"5c9e4d2e-d25a-42d8-a5cf-601939818b60","Type":"ContainerStarted","Data":"2e4f92b5b890d2a5a529db3c2f7247d9dc5be947ceef9f07b220d32961d9e543"} Nov 
24 23:11:57 crc kubenswrapper[4915]: I1124 23:11:57.253070 4915 generic.go:334] "Generic (PLEG): container finished" podID="5c9e4d2e-d25a-42d8-a5cf-601939818b60" containerID="2e4f92b5b890d2a5a529db3c2f7247d9dc5be947ceef9f07b220d32961d9e543" exitCode=0 Nov 24 23:11:57 crc kubenswrapper[4915]: I1124 23:11:57.253122 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7kkh" event={"ID":"5c9e4d2e-d25a-42d8-a5cf-601939818b60","Type":"ContainerDied","Data":"2e4f92b5b890d2a5a529db3c2f7247d9dc5be947ceef9f07b220d32961d9e543"} Nov 24 23:11:58 crc kubenswrapper[4915]: E1124 23:11:58.105009 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="e69116cf-d50e-44af-899f-2e11d16e45d1" Nov 24 23:11:58 crc kubenswrapper[4915]: I1124 23:11:58.271931 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e69116cf-d50e-44af-899f-2e11d16e45d1","Type":"ContainerStarted","Data":"ac2e7e411ee846f91a0a08e40a75ce13d7661e1692896e19841bf95ccb7ef3e1"} Nov 24 23:11:58 crc kubenswrapper[4915]: I1124 23:11:58.272582 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 24 23:11:58 crc kubenswrapper[4915]: E1124 23:11:58.279149 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e69116cf-d50e-44af-899f-2e11d16e45d1" Nov 24 23:11:59 crc kubenswrapper[4915]: I1124 23:11:59.299284 4915 generic.go:334] "Generic (PLEG): container finished" podID="4b249c5c-fb49-4f90-9821-2c4c7b37d448" 
containerID="3e2bc40d8141a053268a19df9e449dfc313ffe1e064df31601ea3f0e4980a19b" exitCode=0 Nov 24 23:11:59 crc kubenswrapper[4915]: I1124 23:11:59.299802 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dfc4k" event={"ID":"4b249c5c-fb49-4f90-9821-2c4c7b37d448","Type":"ContainerDied","Data":"3e2bc40d8141a053268a19df9e449dfc313ffe1e064df31601ea3f0e4980a19b"} Nov 24 23:11:59 crc kubenswrapper[4915]: I1124 23:11:59.308556 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7kkh" event={"ID":"5c9e4d2e-d25a-42d8-a5cf-601939818b60","Type":"ContainerStarted","Data":"4f2bfe55d1f373e8195c948fbc2b7c4532ebcb63cbf6260e247eefd1017d6a7d"} Nov 24 23:11:59 crc kubenswrapper[4915]: E1124 23:11:59.310632 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="e69116cf-d50e-44af-899f-2e11d16e45d1" Nov 24 23:11:59 crc kubenswrapper[4915]: I1124 23:11:59.359801 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m7kkh" podStartSLOduration=3.299841249 podStartE2EDuration="6.359757074s" podCreationTimestamp="2025-11-24 23:11:53 +0000 UTC" firstStartedPulling="2025-11-24 23:11:55.220097747 +0000 UTC m=+6733.536349920" lastFinishedPulling="2025-11-24 23:11:58.280013582 +0000 UTC m=+6736.596265745" observedRunningTime="2025-11-24 23:11:59.344356749 +0000 UTC m=+6737.660608962" watchObservedRunningTime="2025-11-24 23:11:59.359757074 +0000 UTC m=+6737.676009257" Nov 24 23:12:00 crc kubenswrapper[4915]: I1124 23:12:00.693905 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-dfc4k" Nov 24 23:12:00 crc kubenswrapper[4915]: I1124 23:12:00.806981 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-combined-ca-bundle\") pod \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " Nov 24 23:12:00 crc kubenswrapper[4915]: I1124 23:12:00.807059 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-scripts\") pod \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " Nov 24 23:12:00 crc kubenswrapper[4915]: I1124 23:12:00.807200 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-config-data\") pod \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " Nov 24 23:12:00 crc kubenswrapper[4915]: I1124 23:12:00.807316 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgmpl\" (UniqueName: \"kubernetes.io/projected/4b249c5c-fb49-4f90-9821-2c4c7b37d448-kube-api-access-bgmpl\") pod \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\" (UID: \"4b249c5c-fb49-4f90-9821-2c4c7b37d448\") " Nov 24 23:12:00 crc kubenswrapper[4915]: I1124 23:12:00.813894 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-scripts" (OuterVolumeSpecName: "scripts") pod "4b249c5c-fb49-4f90-9821-2c4c7b37d448" (UID: "4b249c5c-fb49-4f90-9821-2c4c7b37d448"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:00 crc kubenswrapper[4915]: I1124 23:12:00.814855 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b249c5c-fb49-4f90-9821-2c4c7b37d448-kube-api-access-bgmpl" (OuterVolumeSpecName: "kube-api-access-bgmpl") pod "4b249c5c-fb49-4f90-9821-2c4c7b37d448" (UID: "4b249c5c-fb49-4f90-9821-2c4c7b37d448"). InnerVolumeSpecName "kube-api-access-bgmpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:12:00 crc kubenswrapper[4915]: I1124 23:12:00.840320 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-config-data" (OuterVolumeSpecName: "config-data") pod "4b249c5c-fb49-4f90-9821-2c4c7b37d448" (UID: "4b249c5c-fb49-4f90-9821-2c4c7b37d448"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:00 crc kubenswrapper[4915]: I1124 23:12:00.860947 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b249c5c-fb49-4f90-9821-2c4c7b37d448" (UID: "4b249c5c-fb49-4f90-9821-2c4c7b37d448"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:00 crc kubenswrapper[4915]: I1124 23:12:00.909638 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgmpl\" (UniqueName: \"kubernetes.io/projected/4b249c5c-fb49-4f90-9821-2c4c7b37d448-kube-api-access-bgmpl\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:00 crc kubenswrapper[4915]: I1124 23:12:00.909676 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:00 crc kubenswrapper[4915]: I1124 23:12:00.909686 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:00 crc kubenswrapper[4915]: I1124 23:12:00.909695 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b249c5c-fb49-4f90-9821-2c4c7b37d448-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:01 crc kubenswrapper[4915]: I1124 23:12:01.333914 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-dfc4k" event={"ID":"4b249c5c-fb49-4f90-9821-2c4c7b37d448","Type":"ContainerDied","Data":"3de423219df149a91a26af04d2fa565f30c1abf89f5fb9a3cbc0b5e4626c25c9"} Nov 24 23:12:01 crc kubenswrapper[4915]: I1124 23:12:01.334271 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3de423219df149a91a26af04d2fa565f30c1abf89f5fb9a3cbc0b5e4626c25c9" Nov 24 23:12:01 crc kubenswrapper[4915]: I1124 23:12:01.334014 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-dfc4k" Nov 24 23:12:03 crc kubenswrapper[4915]: I1124 23:12:03.945065 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:12:03 crc kubenswrapper[4915]: I1124 23:12:03.945897 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:12:04 crc kubenswrapper[4915]: I1124 23:12:04.047902 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:12:04 crc kubenswrapper[4915]: I1124 23:12:04.426999 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:12:04 crc kubenswrapper[4915]: I1124 23:12:04.484249 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:12:04 crc kubenswrapper[4915]: I1124 23:12:04.560880 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7kkh"] Nov 24 23:12:05 crc kubenswrapper[4915]: I1124 23:12:05.401534 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"2a89069e46a05c46d5332f059d72afceaca9bcadbe4fdf497e4e10e488952af5"} Nov 24 23:12:06 crc kubenswrapper[4915]: I1124 23:12:06.419920 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m7kkh" podUID="5c9e4d2e-d25a-42d8-a5cf-601939818b60" containerName="registry-server" containerID="cri-o://4f2bfe55d1f373e8195c948fbc2b7c4532ebcb63cbf6260e247eefd1017d6a7d" gracePeriod=2 Nov 24 23:12:06 crc kubenswrapper[4915]: I1124 23:12:06.874127 4915 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.109885 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.216645 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9e4d2e-d25a-42d8-a5cf-601939818b60-catalog-content\") pod \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\" (UID: \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\") " Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.216932 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9e4d2e-d25a-42d8-a5cf-601939818b60-utilities\") pod \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\" (UID: \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\") " Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.217058 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmmfc\" (UniqueName: \"kubernetes.io/projected/5c9e4d2e-d25a-42d8-a5cf-601939818b60-kube-api-access-pmmfc\") pod \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\" (UID: \"5c9e4d2e-d25a-42d8-a5cf-601939818b60\") " Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.217654 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9e4d2e-d25a-42d8-a5cf-601939818b60-utilities" (OuterVolumeSpecName: "utilities") pod "5c9e4d2e-d25a-42d8-a5cf-601939818b60" (UID: "5c9e4d2e-d25a-42d8-a5cf-601939818b60"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.218068 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9e4d2e-d25a-42d8-a5cf-601939818b60-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.224029 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9e4d2e-d25a-42d8-a5cf-601939818b60-kube-api-access-pmmfc" (OuterVolumeSpecName: "kube-api-access-pmmfc") pod "5c9e4d2e-d25a-42d8-a5cf-601939818b60" (UID: "5c9e4d2e-d25a-42d8-a5cf-601939818b60"). InnerVolumeSpecName "kube-api-access-pmmfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.253386 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9e4d2e-d25a-42d8-a5cf-601939818b60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c9e4d2e-d25a-42d8-a5cf-601939818b60" (UID: "5c9e4d2e-d25a-42d8-a5cf-601939818b60"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.322596 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9e4d2e-d25a-42d8-a5cf-601939818b60-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.322637 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmmfc\" (UniqueName: \"kubernetes.io/projected/5c9e4d2e-d25a-42d8-a5cf-601939818b60-kube-api-access-pmmfc\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.439217 4915 generic.go:334] "Generic (PLEG): container finished" podID="5c9e4d2e-d25a-42d8-a5cf-601939818b60" containerID="4f2bfe55d1f373e8195c948fbc2b7c4532ebcb63cbf6260e247eefd1017d6a7d" exitCode=0 Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.439272 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7kkh" event={"ID":"5c9e4d2e-d25a-42d8-a5cf-601939818b60","Type":"ContainerDied","Data":"4f2bfe55d1f373e8195c948fbc2b7c4532ebcb63cbf6260e247eefd1017d6a7d"} Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.439315 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m7kkh" event={"ID":"5c9e4d2e-d25a-42d8-a5cf-601939818b60","Type":"ContainerDied","Data":"b7167c128cf8cca89a2f29ae3c77d802e50846532781fa4bb0ba69af2959dd0f"} Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.439317 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m7kkh" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.439335 4915 scope.go:117] "RemoveContainer" containerID="4f2bfe55d1f373e8195c948fbc2b7c4532ebcb63cbf6260e247eefd1017d6a7d" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.514317 4915 scope.go:117] "RemoveContainer" containerID="2e4f92b5b890d2a5a529db3c2f7247d9dc5be947ceef9f07b220d32961d9e543" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.536939 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7kkh"] Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.559513 4915 scope.go:117] "RemoveContainer" containerID="bb84ee0523f450a3c3b4b7c4f8bfb7f79055214cc4142ded70d7fd933e73b131" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.560635 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m7kkh"] Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.623046 4915 scope.go:117] "RemoveContainer" containerID="4f2bfe55d1f373e8195c948fbc2b7c4532ebcb63cbf6260e247eefd1017d6a7d" Nov 24 23:12:07 crc kubenswrapper[4915]: E1124 23:12:07.624171 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f2bfe55d1f373e8195c948fbc2b7c4532ebcb63cbf6260e247eefd1017d6a7d\": container with ID starting with 4f2bfe55d1f373e8195c948fbc2b7c4532ebcb63cbf6260e247eefd1017d6a7d not found: ID does not exist" containerID="4f2bfe55d1f373e8195c948fbc2b7c4532ebcb63cbf6260e247eefd1017d6a7d" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.624201 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f2bfe55d1f373e8195c948fbc2b7c4532ebcb63cbf6260e247eefd1017d6a7d"} err="failed to get container status \"4f2bfe55d1f373e8195c948fbc2b7c4532ebcb63cbf6260e247eefd1017d6a7d\": rpc error: code = NotFound desc = could not find container 
\"4f2bfe55d1f373e8195c948fbc2b7c4532ebcb63cbf6260e247eefd1017d6a7d\": container with ID starting with 4f2bfe55d1f373e8195c948fbc2b7c4532ebcb63cbf6260e247eefd1017d6a7d not found: ID does not exist" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.624225 4915 scope.go:117] "RemoveContainer" containerID="2e4f92b5b890d2a5a529db3c2f7247d9dc5be947ceef9f07b220d32961d9e543" Nov 24 23:12:07 crc kubenswrapper[4915]: E1124 23:12:07.624455 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4f92b5b890d2a5a529db3c2f7247d9dc5be947ceef9f07b220d32961d9e543\": container with ID starting with 2e4f92b5b890d2a5a529db3c2f7247d9dc5be947ceef9f07b220d32961d9e543 not found: ID does not exist" containerID="2e4f92b5b890d2a5a529db3c2f7247d9dc5be947ceef9f07b220d32961d9e543" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.624474 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4f92b5b890d2a5a529db3c2f7247d9dc5be947ceef9f07b220d32961d9e543"} err="failed to get container status \"2e4f92b5b890d2a5a529db3c2f7247d9dc5be947ceef9f07b220d32961d9e543\": rpc error: code = NotFound desc = could not find container \"2e4f92b5b890d2a5a529db3c2f7247d9dc5be947ceef9f07b220d32961d9e543\": container with ID starting with 2e4f92b5b890d2a5a529db3c2f7247d9dc5be947ceef9f07b220d32961d9e543 not found: ID does not exist" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.624487 4915 scope.go:117] "RemoveContainer" containerID="bb84ee0523f450a3c3b4b7c4f8bfb7f79055214cc4142ded70d7fd933e73b131" Nov 24 23:12:07 crc kubenswrapper[4915]: E1124 23:12:07.624677 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb84ee0523f450a3c3b4b7c4f8bfb7f79055214cc4142ded70d7fd933e73b131\": container with ID starting with bb84ee0523f450a3c3b4b7c4f8bfb7f79055214cc4142ded70d7fd933e73b131 not found: ID does not exist" 
containerID="bb84ee0523f450a3c3b4b7c4f8bfb7f79055214cc4142ded70d7fd933e73b131" Nov 24 23:12:07 crc kubenswrapper[4915]: I1124 23:12:07.624696 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb84ee0523f450a3c3b4b7c4f8bfb7f79055214cc4142ded70d7fd933e73b131"} err="failed to get container status \"bb84ee0523f450a3c3b4b7c4f8bfb7f79055214cc4142ded70d7fd933e73b131\": rpc error: code = NotFound desc = could not find container \"bb84ee0523f450a3c3b4b7c4f8bfb7f79055214cc4142ded70d7fd933e73b131\": container with ID starting with bb84ee0523f450a3c3b4b7c4f8bfb7f79055214cc4142ded70d7fd933e73b131 not found: ID does not exist" Nov 24 23:12:08 crc kubenswrapper[4915]: I1124 23:12:08.441976 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c9e4d2e-d25a-42d8-a5cf-601939818b60" path="/var/lib/kubelet/pods/5c9e4d2e-d25a-42d8-a5cf-601939818b60/volumes" Nov 24 23:12:09 crc kubenswrapper[4915]: I1124 23:12:09.479672 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5cde4f81-df73-4990-885c-690d843e90bb","Type":"ContainerStarted","Data":"4008e1cba667dba9f161e168ea340f91b40faa821707956421ac97d16e7f4162"} Nov 24 23:12:09 crc kubenswrapper[4915]: I1124 23:12:09.516348 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.883654777 podStartE2EDuration="1m4.516328624s" podCreationTimestamp="2025-11-24 23:11:05 +0000 UTC" firstStartedPulling="2025-11-24 23:11:07.237275366 +0000 UTC m=+6685.553527539" lastFinishedPulling="2025-11-24 23:12:06.869949213 +0000 UTC m=+6745.186201386" observedRunningTime="2025-11-24 23:12:09.503702302 +0000 UTC m=+6747.819954485" watchObservedRunningTime="2025-11-24 23:12:09.516328624 +0000 UTC m=+6747.832580807" Nov 24 23:12:10 crc kubenswrapper[4915]: I1124 23:12:10.493846 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-db-sync-bnv6m" event={"ID":"3f948bc6-46ff-4a75-98c5-beafdbd54bcc","Type":"ContainerStarted","Data":"b4034315af3f08e682f91b225dd9abfd4478dd5646f38ef894b68876899a8fd3"} Nov 24 23:12:12 crc kubenswrapper[4915]: I1124 23:12:12.525059 4915 generic.go:334] "Generic (PLEG): container finished" podID="3f948bc6-46ff-4a75-98c5-beafdbd54bcc" containerID="b4034315af3f08e682f91b225dd9abfd4478dd5646f38ef894b68876899a8fd3" exitCode=0 Nov 24 23:12:12 crc kubenswrapper[4915]: I1124 23:12:12.525153 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bnv6m" event={"ID":"3f948bc6-46ff-4a75-98c5-beafdbd54bcc","Type":"ContainerDied","Data":"b4034315af3f08e682f91b225dd9abfd4478dd5646f38ef894b68876899a8fd3"} Nov 24 23:12:13 crc kubenswrapper[4915]: I1124 23:12:13.443564 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 24 23:12:13 crc kubenswrapper[4915]: I1124 23:12:13.998705 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-bnv6m" Nov 24 23:12:14 crc kubenswrapper[4915]: I1124 23:12:14.106320 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-combined-ca-bundle\") pod \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\" (UID: \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\") " Nov 24 23:12:14 crc kubenswrapper[4915]: I1124 23:12:14.106420 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6krm\" (UniqueName: \"kubernetes.io/projected/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-kube-api-access-c6krm\") pod \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\" (UID: \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\") " Nov 24 23:12:14 crc kubenswrapper[4915]: I1124 23:12:14.106540 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-config-data\") pod \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\" (UID: \"3f948bc6-46ff-4a75-98c5-beafdbd54bcc\") " Nov 24 23:12:14 crc kubenswrapper[4915]: I1124 23:12:14.114421 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-kube-api-access-c6krm" (OuterVolumeSpecName: "kube-api-access-c6krm") pod "3f948bc6-46ff-4a75-98c5-beafdbd54bcc" (UID: "3f948bc6-46ff-4a75-98c5-beafdbd54bcc"). InnerVolumeSpecName "kube-api-access-c6krm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:12:14 crc kubenswrapper[4915]: I1124 23:12:14.140201 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f948bc6-46ff-4a75-98c5-beafdbd54bcc" (UID: "3f948bc6-46ff-4a75-98c5-beafdbd54bcc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:14 crc kubenswrapper[4915]: I1124 23:12:14.197456 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-config-data" (OuterVolumeSpecName: "config-data") pod "3f948bc6-46ff-4a75-98c5-beafdbd54bcc" (UID: "3f948bc6-46ff-4a75-98c5-beafdbd54bcc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:14 crc kubenswrapper[4915]: I1124 23:12:14.210875 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6krm\" (UniqueName: \"kubernetes.io/projected/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-kube-api-access-c6krm\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:14 crc kubenswrapper[4915]: I1124 23:12:14.210947 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:14 crc kubenswrapper[4915]: I1124 23:12:14.210966 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f948bc6-46ff-4a75-98c5-beafdbd54bcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:14 crc kubenswrapper[4915]: I1124 23:12:14.586926 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e69116cf-d50e-44af-899f-2e11d16e45d1","Type":"ContainerStarted","Data":"8286978d4c6fbf134552c9f6095408726740eb7462ba03501ffcec53cae08675"} Nov 24 23:12:14 crc kubenswrapper[4915]: I1124 23:12:14.591751 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bnv6m" event={"ID":"3f948bc6-46ff-4a75-98c5-beafdbd54bcc","Type":"ContainerDied","Data":"c35a08293bc22a6e8aada072fb7f741dc8fa616e56cbd4d31f6935859e4e57da"} Nov 24 23:12:14 crc kubenswrapper[4915]: I1124 23:12:14.591841 4915 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="c35a08293bc22a6e8aada072fb7f741dc8fa616e56cbd4d31f6935859e4e57da" Nov 24 23:12:14 crc kubenswrapper[4915]: I1124 23:12:14.591933 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bnv6m" Nov 24 23:12:14 crc kubenswrapper[4915]: I1124 23:12:14.619836 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.059397355 podStartE2EDuration="1m10.619802883s" podCreationTimestamp="2025-11-24 23:11:04 +0000 UTC" firstStartedPulling="2025-11-24 23:11:05.082264073 +0000 UTC m=+6683.398516246" lastFinishedPulling="2025-11-24 23:12:13.642669601 +0000 UTC m=+6751.958921774" observedRunningTime="2025-11-24 23:12:14.606791952 +0000 UTC m=+6752.923044135" watchObservedRunningTime="2025-11-24 23:12:14.619802883 +0000 UTC m=+6752.936055056" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.741895 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-57b8f9ccb-vdpvp"] Nov 24 23:12:15 crc kubenswrapper[4915]: E1124 23:12:15.742718 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b249c5c-fb49-4f90-9821-2c4c7b37d448" containerName="aodh-db-sync" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.742732 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b249c5c-fb49-4f90-9821-2c4c7b37d448" containerName="aodh-db-sync" Nov 24 23:12:15 crc kubenswrapper[4915]: E1124 23:12:15.742745 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9e4d2e-d25a-42d8-a5cf-601939818b60" containerName="extract-utilities" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.742752 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9e4d2e-d25a-42d8-a5cf-601939818b60" containerName="extract-utilities" Nov 24 23:12:15 crc kubenswrapper[4915]: E1124 23:12:15.742768 4915 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5c9e4d2e-d25a-42d8-a5cf-601939818b60" containerName="extract-content" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.742789 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9e4d2e-d25a-42d8-a5cf-601939818b60" containerName="extract-content" Nov 24 23:12:15 crc kubenswrapper[4915]: E1124 23:12:15.742818 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9e4d2e-d25a-42d8-a5cf-601939818b60" containerName="registry-server" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.742824 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9e4d2e-d25a-42d8-a5cf-601939818b60" containerName="registry-server" Nov 24 23:12:15 crc kubenswrapper[4915]: E1124 23:12:15.742866 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f948bc6-46ff-4a75-98c5-beafdbd54bcc" containerName="heat-db-sync" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.742871 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f948bc6-46ff-4a75-98c5-beafdbd54bcc" containerName="heat-db-sync" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.743110 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9e4d2e-d25a-42d8-a5cf-601939818b60" containerName="registry-server" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.743140 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f948bc6-46ff-4a75-98c5-beafdbd54bcc" containerName="heat-db-sync" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.743156 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b249c5c-fb49-4f90-9821-2c4c7b37d448" containerName="aodh-db-sync" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.744015 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.753654 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-766d8d5b4c-d9mzt"] Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.755384 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.765506 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-6f89f694b6-btvwg"] Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.767345 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.778155 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-766d8d5b4c-d9mzt"] Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.818117 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-57b8f9ccb-vdpvp"] Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.837023 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f89f694b6-btvwg"] Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.852570 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd2ba05-2c59-4467-9374-0777760cffd3-combined-ca-bundle\") pod \"heat-engine-766d8d5b4c-d9mzt\" (UID: \"3dd2ba05-2c59-4467-9374-0777760cffd3\") " pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.852614 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-combined-ca-bundle\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: 
\"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.852644 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-combined-ca-bundle\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.852662 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-config-data-custom\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.852699 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhpzm\" (UniqueName: \"kubernetes.io/projected/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-kube-api-access-dhpzm\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.852746 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3dd2ba05-2c59-4467-9374-0777760cffd3-config-data-custom\") pod \"heat-engine-766d8d5b4c-d9mzt\" (UID: \"3dd2ba05-2c59-4467-9374-0777760cffd3\") " pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.852787 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3dd2ba05-2c59-4467-9374-0777760cffd3-config-data\") pod \"heat-engine-766d8d5b4c-d9mzt\" (UID: \"3dd2ba05-2c59-4467-9374-0777760cffd3\") " pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.853053 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4gpx\" (UniqueName: \"kubernetes.io/projected/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-kube-api-access-d4gpx\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.853141 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-public-tls-certs\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.853164 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkpxq\" (UniqueName: \"kubernetes.io/projected/3dd2ba05-2c59-4467-9374-0777760cffd3-kube-api-access-gkpxq\") pod \"heat-engine-766d8d5b4c-d9mzt\" (UID: \"3dd2ba05-2c59-4467-9374-0777760cffd3\") " pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.853191 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-config-data\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.853237 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-internal-tls-certs\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.853355 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-public-tls-certs\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.853406 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-config-data\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.853657 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-internal-tls-certs\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.853712 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-config-data-custom\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.955814 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3dd2ba05-2c59-4467-9374-0777760cffd3-config-data-custom\") pod \"heat-engine-766d8d5b4c-d9mzt\" (UID: \"3dd2ba05-2c59-4467-9374-0777760cffd3\") " pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.955868 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd2ba05-2c59-4467-9374-0777760cffd3-config-data\") pod \"heat-engine-766d8d5b4c-d9mzt\" (UID: \"3dd2ba05-2c59-4467-9374-0777760cffd3\") " pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.955942 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4gpx\" (UniqueName: \"kubernetes.io/projected/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-kube-api-access-d4gpx\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.955979 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-public-tls-certs\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.956002 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkpxq\" (UniqueName: \"kubernetes.io/projected/3dd2ba05-2c59-4467-9374-0777760cffd3-kube-api-access-gkpxq\") pod \"heat-engine-766d8d5b4c-d9mzt\" (UID: \"3dd2ba05-2c59-4467-9374-0777760cffd3\") " pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.956028 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-config-data\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.956058 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-internal-tls-certs\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.956116 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-public-tls-certs\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.956152 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-config-data\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.956219 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-internal-tls-certs\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.956242 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-config-data-custom\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.956323 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd2ba05-2c59-4467-9374-0777760cffd3-combined-ca-bundle\") pod \"heat-engine-766d8d5b4c-d9mzt\" (UID: \"3dd2ba05-2c59-4467-9374-0777760cffd3\") " pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.956365 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-combined-ca-bundle\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.956396 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-combined-ca-bundle\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.956416 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-config-data-custom\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.956460 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhpzm\" (UniqueName: 
\"kubernetes.io/projected/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-kube-api-access-dhpzm\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.964751 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-config-data\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.966966 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-combined-ca-bundle\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.967391 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-internal-tls-certs\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.970858 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-config-data\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.972384 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3dd2ba05-2c59-4467-9374-0777760cffd3-config-data-custom\") pod 
\"heat-engine-766d8d5b4c-d9mzt\" (UID: \"3dd2ba05-2c59-4467-9374-0777760cffd3\") " pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.972808 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-public-tls-certs\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.973970 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dd2ba05-2c59-4467-9374-0777760cffd3-combined-ca-bundle\") pod \"heat-engine-766d8d5b4c-d9mzt\" (UID: \"3dd2ba05-2c59-4467-9374-0777760cffd3\") " pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.974361 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-public-tls-certs\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.975007 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-config-data-custom\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.976042 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkpxq\" (UniqueName: \"kubernetes.io/projected/3dd2ba05-2c59-4467-9374-0777760cffd3-kube-api-access-gkpxq\") pod \"heat-engine-766d8d5b4c-d9mzt\" (UID: \"3dd2ba05-2c59-4467-9374-0777760cffd3\") " 
pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.977856 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-config-data-custom\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.980173 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-internal-tls-certs\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.984725 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dd2ba05-2c59-4467-9374-0777760cffd3-config-data\") pod \"heat-engine-766d8d5b4c-d9mzt\" (UID: \"3dd2ba05-2c59-4467-9374-0777760cffd3\") " pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.985308 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-combined-ca-bundle\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:15 crc kubenswrapper[4915]: I1124 23:12:15.985468 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4gpx\" (UniqueName: \"kubernetes.io/projected/08e2ac0b-bab8-40e2-9369-9ccb8f4e377b-kube-api-access-d4gpx\") pod \"heat-cfnapi-6f89f694b6-btvwg\" (UID: \"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b\") " pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:15 crc kubenswrapper[4915]: 
I1124 23:12:15.986086 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhpzm\" (UniqueName: \"kubernetes.io/projected/2fde2e2c-d0d1-42a3-9436-28a1bb06112e-kube-api-access-dhpzm\") pod \"heat-api-57b8f9ccb-vdpvp\" (UID: \"2fde2e2c-d0d1-42a3-9436-28a1bb06112e\") " pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:16 crc kubenswrapper[4915]: I1124 23:12:16.062635 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:16 crc kubenswrapper[4915]: I1124 23:12:16.072269 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:16 crc kubenswrapper[4915]: I1124 23:12:16.085423 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:16 crc kubenswrapper[4915]: I1124 23:12:16.673691 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-57b8f9ccb-vdpvp"] Nov 24 23:12:16 crc kubenswrapper[4915]: W1124 23:12:16.685151 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fde2e2c_d0d1_42a3_9436_28a1bb06112e.slice/crio-fe1d5c8af34a93f20dfb7449ea0feaa7c5658d72f16144fa31776fb032cea8f9 WatchSource:0}: Error finding container fe1d5c8af34a93f20dfb7449ea0feaa7c5658d72f16144fa31776fb032cea8f9: Status 404 returned error can't find the container with id fe1d5c8af34a93f20dfb7449ea0feaa7c5658d72f16144fa31776fb032cea8f9 Nov 24 23:12:16 crc kubenswrapper[4915]: I1124 23:12:16.782614 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-6f89f694b6-btvwg"] Nov 24 23:12:16 crc kubenswrapper[4915]: W1124 23:12:16.782620 4915 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08e2ac0b_bab8_40e2_9369_9ccb8f4e377b.slice/crio-c97e8080819877906fd064162f2d8728269365d15e411fb528396f6ae6cd13fe WatchSource:0}: Error finding container c97e8080819877906fd064162f2d8728269365d15e411fb528396f6ae6cd13fe: Status 404 returned error can't find the container with id c97e8080819877906fd064162f2d8728269365d15e411fb528396f6ae6cd13fe Nov 24 23:12:16 crc kubenswrapper[4915]: I1124 23:12:16.802421 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-766d8d5b4c-d9mzt"] Nov 24 23:12:16 crc kubenswrapper[4915]: W1124 23:12:16.805232 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dd2ba05_2c59_4467_9374_0777760cffd3.slice/crio-e88f42bc634a47e5b4c9732c453877fcbf6e6cace0035996a646337d829f96e4 WatchSource:0}: Error finding container e88f42bc634a47e5b4c9732c453877fcbf6e6cace0035996a646337d829f96e4: Status 404 returned error can't find the container with id e88f42bc634a47e5b4c9732c453877fcbf6e6cace0035996a646337d829f96e4 Nov 24 23:12:17 crc kubenswrapper[4915]: I1124 23:12:17.622917 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f89f694b6-btvwg" event={"ID":"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b","Type":"ContainerStarted","Data":"c97e8080819877906fd064162f2d8728269365d15e411fb528396f6ae6cd13fe"} Nov 24 23:12:17 crc kubenswrapper[4915]: I1124 23:12:17.630619 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57b8f9ccb-vdpvp" event={"ID":"2fde2e2c-d0d1-42a3-9436-28a1bb06112e","Type":"ContainerStarted","Data":"fe1d5c8af34a93f20dfb7449ea0feaa7c5658d72f16144fa31776fb032cea8f9"} Nov 24 23:12:17 crc kubenswrapper[4915]: I1124 23:12:17.641994 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-766d8d5b4c-d9mzt" 
event={"ID":"3dd2ba05-2c59-4467-9374-0777760cffd3","Type":"ContainerStarted","Data":"6f91213f0cec1214123511f2c94c407c19c8dfd9ceea24be8c3c68410cd168b5"} Nov 24 23:12:17 crc kubenswrapper[4915]: I1124 23:12:17.642051 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-766d8d5b4c-d9mzt" event={"ID":"3dd2ba05-2c59-4467-9374-0777760cffd3","Type":"ContainerStarted","Data":"e88f42bc634a47e5b4c9732c453877fcbf6e6cace0035996a646337d829f96e4"} Nov 24 23:12:17 crc kubenswrapper[4915]: I1124 23:12:17.644435 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:17 crc kubenswrapper[4915]: I1124 23:12:17.662448 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-766d8d5b4c-d9mzt" podStartSLOduration=2.662425612 podStartE2EDuration="2.662425612s" podCreationTimestamp="2025-11-24 23:12:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 23:12:17.656240715 +0000 UTC m=+6755.972492898" watchObservedRunningTime="2025-11-24 23:12:17.662425612 +0000 UTC m=+6755.978677785" Nov 24 23:12:19 crc kubenswrapper[4915]: I1124 23:12:19.674308 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f89f694b6-btvwg" event={"ID":"08e2ac0b-bab8-40e2-9369-9ccb8f4e377b","Type":"ContainerStarted","Data":"812d28aff19b7918e76af0ddbfa3618ce5982ecc715ac43e2346d2b6c212c88b"} Nov 24 23:12:19 crc kubenswrapper[4915]: I1124 23:12:19.674830 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:19 crc kubenswrapper[4915]: I1124 23:12:19.676031 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57b8f9ccb-vdpvp" 
event={"ID":"2fde2e2c-d0d1-42a3-9436-28a1bb06112e","Type":"ContainerStarted","Data":"c5f64ee3ebefef395e078cb441bd467340a71f3a8e5e45e2c01c27b4cc67daad"} Nov 24 23:12:19 crc kubenswrapper[4915]: I1124 23:12:19.695027 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-6f89f694b6-btvwg" podStartSLOduration=3.190338524 podStartE2EDuration="4.695010899s" podCreationTimestamp="2025-11-24 23:12:15 +0000 UTC" firstStartedPulling="2025-11-24 23:12:16.785376922 +0000 UTC m=+6755.101629095" lastFinishedPulling="2025-11-24 23:12:18.290049297 +0000 UTC m=+6756.606301470" observedRunningTime="2025-11-24 23:12:19.690978921 +0000 UTC m=+6758.007231104" watchObservedRunningTime="2025-11-24 23:12:19.695010899 +0000 UTC m=+6758.011263072" Nov 24 23:12:19 crc kubenswrapper[4915]: I1124 23:12:19.715903 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-57b8f9ccb-vdpvp" podStartSLOduration=3.117516669 podStartE2EDuration="4.715883073s" podCreationTimestamp="2025-11-24 23:12:15 +0000 UTC" firstStartedPulling="2025-11-24 23:12:16.687026337 +0000 UTC m=+6755.003278510" lastFinishedPulling="2025-11-24 23:12:18.285392741 +0000 UTC m=+6756.601644914" observedRunningTime="2025-11-24 23:12:19.70577002 +0000 UTC m=+6758.022022193" watchObservedRunningTime="2025-11-24 23:12:19.715883073 +0000 UTC m=+6758.032135236" Nov 24 23:12:20 crc kubenswrapper[4915]: I1124 23:12:20.687315 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:27 crc kubenswrapper[4915]: I1124 23:12:27.925892 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-6f89f694b6-btvwg" Nov 24 23:12:27 crc kubenswrapper[4915]: I1124 23:12:27.928636 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-57b8f9ccb-vdpvp" Nov 24 23:12:28 crc kubenswrapper[4915]: I1124 23:12:28.027603 4915 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f8894bc87-v9t9l"] Nov 24 23:12:28 crc kubenswrapper[4915]: I1124 23:12:28.027913 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" podUID="acdc80ea-56e9-4e98-bd4a-6d6d660e93c4" containerName="heat-cfnapi" containerID="cri-o://9191af4d3005f3bdc44461f95a1bef9076d7444b9c91ac9a323f36e9afcd0eaa" gracePeriod=60 Nov 24 23:12:28 crc kubenswrapper[4915]: I1124 23:12:28.049165 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6d86bfd46c-5v69x"] Nov 24 23:12:28 crc kubenswrapper[4915]: I1124 23:12:28.049385 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6d86bfd46c-5v69x" podUID="cb4f24a4-d3ad-4bc2-bbed-47a35479c826" containerName="heat-api" containerID="cri-o://cb8fff6986184c87fa6d6717c605d46266420a1bcac82e9c44ce5e61858e5d9d" gracePeriod=60 Nov 24 23:12:31 crc kubenswrapper[4915]: I1124 23:12:31.215575 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" podUID="acdc80ea-56e9-4e98-bd4a-6d6d660e93c4" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.223:8000/healthcheck\": read tcp 10.217.0.2:59742->10.217.0.223:8000: read: connection reset by peer" Nov 24 23:12:31 crc kubenswrapper[4915]: I1124 23:12:31.232815 4915 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-6d86bfd46c-5v69x" podUID="cb4f24a4-d3ad-4bc2-bbed-47a35479c826" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.222:8004/healthcheck\": read tcp 10.217.0.2:52042->10.217.0.222:8004: read: connection reset by peer" Nov 24 23:12:31 crc kubenswrapper[4915]: I1124 23:12:31.842274 4915 generic.go:334] "Generic (PLEG): container finished" podID="acdc80ea-56e9-4e98-bd4a-6d6d660e93c4" containerID="9191af4d3005f3bdc44461f95a1bef9076d7444b9c91ac9a323f36e9afcd0eaa" 
exitCode=0 Nov 24 23:12:31 crc kubenswrapper[4915]: I1124 23:12:31.842670 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" event={"ID":"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4","Type":"ContainerDied","Data":"9191af4d3005f3bdc44461f95a1bef9076d7444b9c91ac9a323f36e9afcd0eaa"} Nov 24 23:12:31 crc kubenswrapper[4915]: I1124 23:12:31.845860 4915 generic.go:334] "Generic (PLEG): container finished" podID="cb4f24a4-d3ad-4bc2-bbed-47a35479c826" containerID="cb8fff6986184c87fa6d6717c605d46266420a1bcac82e9c44ce5e61858e5d9d" exitCode=0 Nov 24 23:12:31 crc kubenswrapper[4915]: I1124 23:12:31.846045 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d86bfd46c-5v69x" event={"ID":"cb4f24a4-d3ad-4bc2-bbed-47a35479c826","Type":"ContainerDied","Data":"cb8fff6986184c87fa6d6717c605d46266420a1bcac82e9c44ce5e61858e5d9d"} Nov 24 23:12:31 crc kubenswrapper[4915]: I1124 23:12:31.846155 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d86bfd46c-5v69x" event={"ID":"cb4f24a4-d3ad-4bc2-bbed-47a35479c826","Type":"ContainerDied","Data":"01aaebdff58298c9ee0eed04fc04096dc67da46fed61adb4148b5669e7f38e8a"} Nov 24 23:12:31 crc kubenswrapper[4915]: I1124 23:12:31.846227 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01aaebdff58298c9ee0eed04fc04096dc67da46fed61adb4148b5669e7f38e8a" Nov 24 23:12:31 crc kubenswrapper[4915]: I1124 23:12:31.988930 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.005790 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.134422 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-internal-tls-certs\") pod \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.134743 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-public-tls-certs\") pod \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.134815 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-combined-ca-bundle\") pod \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.134854 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdplf\" (UniqueName: \"kubernetes.io/projected/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-kube-api-access-vdplf\") pod \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.134874 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-config-data\") pod \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.134910 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-public-tls-certs\") pod \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.134956 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-config-data\") pod \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.135003 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-combined-ca-bundle\") pod \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.135038 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l47zh\" (UniqueName: \"kubernetes.io/projected/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-kube-api-access-l47zh\") pod \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.135086 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-config-data-custom\") pod \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\" (UID: \"cb4f24a4-d3ad-4bc2-bbed-47a35479c826\") " Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.135109 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-config-data-custom\") pod \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\" (UID: 
\"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.135165 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-internal-tls-certs\") pod \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\" (UID: \"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4\") " Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.142606 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cb4f24a4-d3ad-4bc2-bbed-47a35479c826" (UID: "cb4f24a4-d3ad-4bc2-bbed-47a35479c826"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.144848 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-kube-api-access-vdplf" (OuterVolumeSpecName: "kube-api-access-vdplf") pod "acdc80ea-56e9-4e98-bd4a-6d6d660e93c4" (UID: "acdc80ea-56e9-4e98-bd4a-6d6d660e93c4"). InnerVolumeSpecName "kube-api-access-vdplf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.148950 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-kube-api-access-l47zh" (OuterVolumeSpecName: "kube-api-access-l47zh") pod "cb4f24a4-d3ad-4bc2-bbed-47a35479c826" (UID: "cb4f24a4-d3ad-4bc2-bbed-47a35479c826"). InnerVolumeSpecName "kube-api-access-l47zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.159990 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "acdc80ea-56e9-4e98-bd4a-6d6d660e93c4" (UID: "acdc80ea-56e9-4e98-bd4a-6d6d660e93c4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.190624 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb4f24a4-d3ad-4bc2-bbed-47a35479c826" (UID: "cb4f24a4-d3ad-4bc2-bbed-47a35479c826"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.190954 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "acdc80ea-56e9-4e98-bd4a-6d6d660e93c4" (UID: "acdc80ea-56e9-4e98-bd4a-6d6d660e93c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.206735 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cb4f24a4-d3ad-4bc2-bbed-47a35479c826" (UID: "cb4f24a4-d3ad-4bc2-bbed-47a35479c826"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.207184 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "acdc80ea-56e9-4e98-bd4a-6d6d660e93c4" (UID: "acdc80ea-56e9-4e98-bd4a-6d6d660e93c4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.208303 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-config-data" (OuterVolumeSpecName: "config-data") pod "cb4f24a4-d3ad-4bc2-bbed-47a35479c826" (UID: "cb4f24a4-d3ad-4bc2-bbed-47a35479c826"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.214282 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-config-data" (OuterVolumeSpecName: "config-data") pod "acdc80ea-56e9-4e98-bd4a-6d6d660e93c4" (UID: "acdc80ea-56e9-4e98-bd4a-6d6d660e93c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.222201 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cb4f24a4-d3ad-4bc2-bbed-47a35479c826" (UID: "cb4f24a4-d3ad-4bc2-bbed-47a35479c826"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.238034 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.238066 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.238083 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdplf\" (UniqueName: \"kubernetes.io/projected/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-kube-api-access-vdplf\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.238098 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.238110 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.238121 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.238133 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l47zh\" (UniqueName: \"kubernetes.io/projected/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-kube-api-access-l47zh\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 
23:12:32.238146 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.238158 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.238169 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.238180 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb4f24a4-d3ad-4bc2-bbed-47a35479c826-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.252180 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "acdc80ea-56e9-4e98-bd4a-6d6d660e93c4" (UID: "acdc80ea-56e9-4e98-bd4a-6d6d660e93c4"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.354584 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.857164 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" event={"ID":"acdc80ea-56e9-4e98-bd4a-6d6d660e93c4","Type":"ContainerDied","Data":"f5a8726cf154415ab859df4f0042e5eabffb3c20ce73bca4f2a8d310ca030b9c"} Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.857614 4915 scope.go:117] "RemoveContainer" containerID="9191af4d3005f3bdc44461f95a1bef9076d7444b9c91ac9a323f36e9afcd0eaa" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.857192 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d86bfd46c-5v69x" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.857186 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-6f8894bc87-v9t9l" Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.884623 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-6f8894bc87-v9t9l"] Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.898138 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-6f8894bc87-v9t9l"] Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.908234 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6d86bfd46c-5v69x"] Nov 24 23:12:32 crc kubenswrapper[4915]: I1124 23:12:32.917730 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6d86bfd46c-5v69x"] Nov 24 23:12:34 crc kubenswrapper[4915]: I1124 23:12:34.448125 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acdc80ea-56e9-4e98-bd4a-6d6d660e93c4" path="/var/lib/kubelet/pods/acdc80ea-56e9-4e98-bd4a-6d6d660e93c4/volumes" Nov 24 23:12:34 crc kubenswrapper[4915]: I1124 23:12:34.450517 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb4f24a4-d3ad-4bc2-bbed-47a35479c826" path="/var/lib/kubelet/pods/cb4f24a4-d3ad-4bc2-bbed-47a35479c826/volumes" Nov 24 23:12:36 crc kubenswrapper[4915]: I1124 23:12:36.143880 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-766d8d5b4c-d9mzt" Nov 24 23:12:36 crc kubenswrapper[4915]: I1124 23:12:36.217313 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-c6db6bd6d-wbhkr"] Nov 24 23:12:36 crc kubenswrapper[4915]: I1124 23:12:36.217568 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-c6db6bd6d-wbhkr" podUID="4f28548e-b6b5-4a05-8493-a3c1896e4c6d" containerName="heat-engine" containerID="cri-o://9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9" gracePeriod=60 Nov 24 23:12:37 crc kubenswrapper[4915]: I1124 23:12:37.997315 4915 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ddxwb"] Nov 24 23:12:37 crc kubenswrapper[4915]: E1124 23:12:37.998115 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb4f24a4-d3ad-4bc2-bbed-47a35479c826" containerName="heat-api" Nov 24 23:12:37 crc kubenswrapper[4915]: I1124 23:12:37.998129 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb4f24a4-d3ad-4bc2-bbed-47a35479c826" containerName="heat-api" Nov 24 23:12:37 crc kubenswrapper[4915]: E1124 23:12:37.998169 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acdc80ea-56e9-4e98-bd4a-6d6d660e93c4" containerName="heat-cfnapi" Nov 24 23:12:37 crc kubenswrapper[4915]: I1124 23:12:37.998175 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="acdc80ea-56e9-4e98-bd4a-6d6d660e93c4" containerName="heat-cfnapi" Nov 24 23:12:37 crc kubenswrapper[4915]: I1124 23:12:37.998397 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb4f24a4-d3ad-4bc2-bbed-47a35479c826" containerName="heat-api" Nov 24 23:12:37 crc kubenswrapper[4915]: I1124 23:12:37.998414 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="acdc80ea-56e9-4e98-bd4a-6d6d660e93c4" containerName="heat-cfnapi" Nov 24 23:12:38 crc kubenswrapper[4915]: I1124 23:12:38.000194 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:38 crc kubenswrapper[4915]: I1124 23:12:38.008126 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ddxwb"] Nov 24 23:12:38 crc kubenswrapper[4915]: I1124 23:12:38.019305 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e233ad8b-763b-4205-8242-1be3217aa248-catalog-content\") pod \"certified-operators-ddxwb\" (UID: \"e233ad8b-763b-4205-8242-1be3217aa248\") " pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:38 crc kubenswrapper[4915]: I1124 23:12:38.019450 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cts7t\" (UniqueName: \"kubernetes.io/projected/e233ad8b-763b-4205-8242-1be3217aa248-kube-api-access-cts7t\") pod \"certified-operators-ddxwb\" (UID: \"e233ad8b-763b-4205-8242-1be3217aa248\") " pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:38 crc kubenswrapper[4915]: I1124 23:12:38.019709 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e233ad8b-763b-4205-8242-1be3217aa248-utilities\") pod \"certified-operators-ddxwb\" (UID: \"e233ad8b-763b-4205-8242-1be3217aa248\") " pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:38 crc kubenswrapper[4915]: I1124 23:12:38.122134 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cts7t\" (UniqueName: \"kubernetes.io/projected/e233ad8b-763b-4205-8242-1be3217aa248-kube-api-access-cts7t\") pod \"certified-operators-ddxwb\" (UID: \"e233ad8b-763b-4205-8242-1be3217aa248\") " pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:38 crc kubenswrapper[4915]: I1124 23:12:38.122351 4915 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e233ad8b-763b-4205-8242-1be3217aa248-utilities\") pod \"certified-operators-ddxwb\" (UID: \"e233ad8b-763b-4205-8242-1be3217aa248\") " pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:38 crc kubenswrapper[4915]: I1124 23:12:38.122422 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e233ad8b-763b-4205-8242-1be3217aa248-catalog-content\") pod \"certified-operators-ddxwb\" (UID: \"e233ad8b-763b-4205-8242-1be3217aa248\") " pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:38 crc kubenswrapper[4915]: I1124 23:12:38.124222 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e233ad8b-763b-4205-8242-1be3217aa248-catalog-content\") pod \"certified-operators-ddxwb\" (UID: \"e233ad8b-763b-4205-8242-1be3217aa248\") " pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:38 crc kubenswrapper[4915]: I1124 23:12:38.124222 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e233ad8b-763b-4205-8242-1be3217aa248-utilities\") pod \"certified-operators-ddxwb\" (UID: \"e233ad8b-763b-4205-8242-1be3217aa248\") " pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:38 crc kubenswrapper[4915]: I1124 23:12:38.143844 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cts7t\" (UniqueName: \"kubernetes.io/projected/e233ad8b-763b-4205-8242-1be3217aa248-kube-api-access-cts7t\") pod \"certified-operators-ddxwb\" (UID: \"e233ad8b-763b-4205-8242-1be3217aa248\") " pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:38 crc kubenswrapper[4915]: I1124 23:12:38.319268 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:38 crc kubenswrapper[4915]: W1124 23:12:38.901143 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode233ad8b_763b_4205_8242_1be3217aa248.slice/crio-c82b9930c63140e605812c7421d14b2716e443da43a51bec3c013ac9e51b989b WatchSource:0}: Error finding container c82b9930c63140e605812c7421d14b2716e443da43a51bec3c013ac9e51b989b: Status 404 returned error can't find the container with id c82b9930c63140e605812c7421d14b2716e443da43a51bec3c013ac9e51b989b Nov 24 23:12:38 crc kubenswrapper[4915]: I1124 23:12:38.907090 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ddxwb"] Nov 24 23:12:38 crc kubenswrapper[4915]: I1124 23:12:38.934209 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddxwb" event={"ID":"e233ad8b-763b-4205-8242-1be3217aa248","Type":"ContainerStarted","Data":"c82b9930c63140e605812c7421d14b2716e443da43a51bec3c013ac9e51b989b"} Nov 24 23:12:39 crc kubenswrapper[4915]: I1124 23:12:39.945537 4915 generic.go:334] "Generic (PLEG): container finished" podID="e233ad8b-763b-4205-8242-1be3217aa248" containerID="56b96c7d87607fbcf62a8dfd7d1e74dcde5a89cd7c4f4176ebcce7cbf19f5d46" exitCode=0 Nov 24 23:12:39 crc kubenswrapper[4915]: I1124 23:12:39.945641 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddxwb" event={"ID":"e233ad8b-763b-4205-8242-1be3217aa248","Type":"ContainerDied","Data":"56b96c7d87607fbcf62a8dfd7d1e74dcde5a89cd7c4f4176ebcce7cbf19f5d46"} Nov 24 23:12:40 crc kubenswrapper[4915]: I1124 23:12:40.960303 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddxwb" 
event={"ID":"e233ad8b-763b-4205-8242-1be3217aa248","Type":"ContainerStarted","Data":"7219166848b313e8efaa8ea88355fdc44d37680e3598a10be76c35fc5c2a7116"} Nov 24 23:12:42 crc kubenswrapper[4915]: E1124 23:12:42.298190 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 23:12:42 crc kubenswrapper[4915]: E1124 23:12:42.300035 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 23:12:42 crc kubenswrapper[4915]: E1124 23:12:42.302121 4915 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 24 23:12:42 crc kubenswrapper[4915]: E1124 23:12:42.302207 4915 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-c6db6bd6d-wbhkr" podUID="4f28548e-b6b5-4a05-8493-a3c1896e4c6d" containerName="heat-engine" Nov 24 23:12:42 crc kubenswrapper[4915]: I1124 23:12:42.991029 4915 generic.go:334] "Generic (PLEG): container finished" podID="e233ad8b-763b-4205-8242-1be3217aa248" containerID="7219166848b313e8efaa8ea88355fdc44d37680e3598a10be76c35fc5c2a7116" exitCode=0 Nov 24 23:12:42 crc kubenswrapper[4915]: I1124 
23:12:42.991121 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddxwb" event={"ID":"e233ad8b-763b-4205-8242-1be3217aa248","Type":"ContainerDied","Data":"7219166848b313e8efaa8ea88355fdc44d37680e3598a10be76c35fc5c2a7116"} Nov 24 23:12:44 crc kubenswrapper[4915]: I1124 23:12:44.009888 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddxwb" event={"ID":"e233ad8b-763b-4205-8242-1be3217aa248","Type":"ContainerStarted","Data":"85fb15f2558c6380e51a235efbb0bb881ac92b6944bfc1b31108a5f9ab8b7c35"} Nov 24 23:12:44 crc kubenswrapper[4915]: I1124 23:12:44.046132 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ddxwb" podStartSLOduration=3.549928995 podStartE2EDuration="7.046099859s" podCreationTimestamp="2025-11-24 23:12:37 +0000 UTC" firstStartedPulling="2025-11-24 23:12:39.948189128 +0000 UTC m=+6778.264441301" lastFinishedPulling="2025-11-24 23:12:43.444359962 +0000 UTC m=+6781.760612165" observedRunningTime="2025-11-24 23:12:44.0413309 +0000 UTC m=+6782.357583113" watchObservedRunningTime="2025-11-24 23:12:44.046099859 +0000 UTC m=+6782.362352072" Nov 24 23:12:45 crc kubenswrapper[4915]: I1124 23:12:45.223729 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 24 23:12:45 crc kubenswrapper[4915]: I1124 23:12:45.224601 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-api" containerID="cri-o://47fd9b87119ef463887f8ae049c15821f51e7ce73f54aafcb36947bc15ab3256" gracePeriod=30 Nov 24 23:12:45 crc kubenswrapper[4915]: I1124 23:12:45.224801 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-notifier" 
containerID="cri-o://4116e70ddb762a4dc7cd825b53527324ad3c4beb8f5becda7fc1e6279d34f6e8" gracePeriod=30 Nov 24 23:12:45 crc kubenswrapper[4915]: I1124 23:12:45.224951 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-listener" containerID="cri-o://fa83c5b6b3d8da55e9ddcbdef89a0014366abf4dcc4962f6fb2799b368caf2c8" gracePeriod=30 Nov 24 23:12:45 crc kubenswrapper[4915]: I1124 23:12:45.224814 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-evaluator" containerID="cri-o://17de28c3ac630450d988b3309ab15d399b34975062a64a41ec8343541cea7975" gracePeriod=30 Nov 24 23:12:46 crc kubenswrapper[4915]: I1124 23:12:46.040379 4915 generic.go:334] "Generic (PLEG): container finished" podID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerID="17de28c3ac630450d988b3309ab15d399b34975062a64a41ec8343541cea7975" exitCode=0 Nov 24 23:12:46 crc kubenswrapper[4915]: I1124 23:12:46.040424 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f894aef5-bbf9-4a91-ab23-6d6c216d5645","Type":"ContainerDied","Data":"17de28c3ac630450d988b3309ab15d399b34975062a64a41ec8343541cea7975"} Nov 24 23:12:48 crc kubenswrapper[4915]: I1124 23:12:48.319405 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:48 crc kubenswrapper[4915]: I1124 23:12:48.320320 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:48 crc kubenswrapper[4915]: I1124 23:12:48.373301 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:48 crc kubenswrapper[4915]: I1124 23:12:48.903014 4915 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 23:12:48 crc kubenswrapper[4915]: I1124 23:12:48.912010 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-combined-ca-bundle\") pod \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " Nov 24 23:12:48 crc kubenswrapper[4915]: I1124 23:12:48.912162 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjxqq\" (UniqueName: \"kubernetes.io/projected/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-kube-api-access-gjxqq\") pod \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " Nov 24 23:12:48 crc kubenswrapper[4915]: I1124 23:12:48.912235 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-config-data-custom\") pod \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " Nov 24 23:12:48 crc kubenswrapper[4915]: I1124 23:12:48.912274 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-config-data\") pod \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\" (UID: \"4f28548e-b6b5-4a05-8493-a3c1896e4c6d\") " Nov 24 23:12:48 crc kubenswrapper[4915]: I1124 23:12:48.920748 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-kube-api-access-gjxqq" (OuterVolumeSpecName: "kube-api-access-gjxqq") pod "4f28548e-b6b5-4a05-8493-a3c1896e4c6d" (UID: "4f28548e-b6b5-4a05-8493-a3c1896e4c6d"). InnerVolumeSpecName "kube-api-access-gjxqq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:12:48 crc kubenswrapper[4915]: I1124 23:12:48.926774 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4f28548e-b6b5-4a05-8493-a3c1896e4c6d" (UID: "4f28548e-b6b5-4a05-8493-a3c1896e4c6d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:48 crc kubenswrapper[4915]: I1124 23:12:48.991804 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-config-data" (OuterVolumeSpecName: "config-data") pod "4f28548e-b6b5-4a05-8493-a3c1896e4c6d" (UID: "4f28548e-b6b5-4a05-8493-a3c1896e4c6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:48 crc kubenswrapper[4915]: I1124 23:12:48.991841 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f28548e-b6b5-4a05-8493-a3c1896e4c6d" (UID: "4f28548e-b6b5-4a05-8493-a3c1896e4c6d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.014828 4915 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.014859 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.014868 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.014877 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjxqq\" (UniqueName: \"kubernetes.io/projected/4f28548e-b6b5-4a05-8493-a3c1896e4c6d-kube-api-access-gjxqq\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.073563 4915 generic.go:334] "Generic (PLEG): container finished" podID="4f28548e-b6b5-4a05-8493-a3c1896e4c6d" containerID="9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9" exitCode=0 Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.073640 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-c6db6bd6d-wbhkr" Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.073622 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-c6db6bd6d-wbhkr" event={"ID":"4f28548e-b6b5-4a05-8493-a3c1896e4c6d","Type":"ContainerDied","Data":"9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9"} Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.073803 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-c6db6bd6d-wbhkr" event={"ID":"4f28548e-b6b5-4a05-8493-a3c1896e4c6d","Type":"ContainerDied","Data":"2582d57fa65ddabf86b88fdb705f4901acdf324e90302869471dadb9f4b6ac88"} Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.073842 4915 scope.go:117] "RemoveContainer" containerID="9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9" Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.077940 4915 generic.go:334] "Generic (PLEG): container finished" podID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerID="4116e70ddb762a4dc7cd825b53527324ad3c4beb8f5becda7fc1e6279d34f6e8" exitCode=0 Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.077966 4915 generic.go:334] "Generic (PLEG): container finished" podID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerID="47fd9b87119ef463887f8ae049c15821f51e7ce73f54aafcb36947bc15ab3256" exitCode=0 Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.078008 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f894aef5-bbf9-4a91-ab23-6d6c216d5645","Type":"ContainerDied","Data":"4116e70ddb762a4dc7cd825b53527324ad3c4beb8f5becda7fc1e6279d34f6e8"} Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.078038 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f894aef5-bbf9-4a91-ab23-6d6c216d5645","Type":"ContainerDied","Data":"47fd9b87119ef463887f8ae049c15821f51e7ce73f54aafcb36947bc15ab3256"} Nov 24 23:12:49 crc 
kubenswrapper[4915]: I1124 23:12:49.112384 4915 scope.go:117] "RemoveContainer" containerID="9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9" Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.114182 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-c6db6bd6d-wbhkr"] Nov 24 23:12:49 crc kubenswrapper[4915]: E1124 23:12:49.115862 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9\": container with ID starting with 9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9 not found: ID does not exist" containerID="9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9" Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.115901 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9"} err="failed to get container status \"9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9\": rpc error: code = NotFound desc = could not find container \"9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9\": container with ID starting with 9d5782ad83c2168cb30c94182ed606a30443b13e950b8365a16dd745b742a3d9 not found: ID does not exist" Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.127489 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-c6db6bd6d-wbhkr"] Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.152942 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.199040 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ddxwb"] Nov 24 23:12:49 crc kubenswrapper[4915]: I1124 23:12:49.974681 4915 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.049246 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-internal-tls-certs\") pod \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.049297 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6dtp\" (UniqueName: \"kubernetes.io/projected/f894aef5-bbf9-4a91-ab23-6d6c216d5645-kube-api-access-f6dtp\") pod \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.049329 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-combined-ca-bundle\") pod \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.049378 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-public-tls-certs\") pod \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.049457 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-config-data\") pod \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.049495 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-scripts\") pod \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\" (UID: \"f894aef5-bbf9-4a91-ab23-6d6c216d5645\") " Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.057107 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f894aef5-bbf9-4a91-ab23-6d6c216d5645-kube-api-access-f6dtp" (OuterVolumeSpecName: "kube-api-access-f6dtp") pod "f894aef5-bbf9-4a91-ab23-6d6c216d5645" (UID: "f894aef5-bbf9-4a91-ab23-6d6c216d5645"). InnerVolumeSpecName "kube-api-access-f6dtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.060600 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-scripts" (OuterVolumeSpecName: "scripts") pod "f894aef5-bbf9-4a91-ab23-6d6c216d5645" (UID: "f894aef5-bbf9-4a91-ab23-6d6c216d5645"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.105972 4915 generic.go:334] "Generic (PLEG): container finished" podID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerID="fa83c5b6b3d8da55e9ddcbdef89a0014366abf4dcc4962f6fb2799b368caf2c8" exitCode=0 Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.106146 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f894aef5-bbf9-4a91-ab23-6d6c216d5645","Type":"ContainerDied","Data":"fa83c5b6b3d8da55e9ddcbdef89a0014366abf4dcc4962f6fb2799b368caf2c8"} Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.106277 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"f894aef5-bbf9-4a91-ab23-6d6c216d5645","Type":"ContainerDied","Data":"5c3570bde007f345f538516f2fdbb4bc7ec0d0b1c2fe6b1f909d05a7cd3d8594"} Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.106384 4915 scope.go:117] "RemoveContainer" containerID="fa83c5b6b3d8da55e9ddcbdef89a0014366abf4dcc4962f6fb2799b368caf2c8" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.106625 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.144060 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f894aef5-bbf9-4a91-ab23-6d6c216d5645" (UID: "f894aef5-bbf9-4a91-ab23-6d6c216d5645"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.153910 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6dtp\" (UniqueName: \"kubernetes.io/projected/f894aef5-bbf9-4a91-ab23-6d6c216d5645-kube-api-access-f6dtp\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.154319 4915 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.154419 4915 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-scripts\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.163444 4915 scope.go:117] "RemoveContainer" containerID="4116e70ddb762a4dc7cd825b53527324ad3c4beb8f5becda7fc1e6279d34f6e8" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.172394 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f894aef5-bbf9-4a91-ab23-6d6c216d5645" (UID: "f894aef5-bbf9-4a91-ab23-6d6c216d5645"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.201432 4915 scope.go:117] "RemoveContainer" containerID="17de28c3ac630450d988b3309ab15d399b34975062a64a41ec8343541cea7975" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.222200 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-config-data" (OuterVolumeSpecName: "config-data") pod "f894aef5-bbf9-4a91-ab23-6d6c216d5645" (UID: "f894aef5-bbf9-4a91-ab23-6d6c216d5645"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.232972 4915 scope.go:117] "RemoveContainer" containerID="47fd9b87119ef463887f8ae049c15821f51e7ce73f54aafcb36947bc15ab3256" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.249359 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f894aef5-bbf9-4a91-ab23-6d6c216d5645" (UID: "f894aef5-bbf9-4a91-ab23-6d6c216d5645"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.257077 4915 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.257114 4915 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.257122 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f894aef5-bbf9-4a91-ab23-6d6c216d5645-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.276572 4915 scope.go:117] "RemoveContainer" containerID="fa83c5b6b3d8da55e9ddcbdef89a0014366abf4dcc4962f6fb2799b368caf2c8" Nov 24 23:12:50 crc kubenswrapper[4915]: E1124 23:12:50.276952 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa83c5b6b3d8da55e9ddcbdef89a0014366abf4dcc4962f6fb2799b368caf2c8\": container with ID starting with 
fa83c5b6b3d8da55e9ddcbdef89a0014366abf4dcc4962f6fb2799b368caf2c8 not found: ID does not exist" containerID="fa83c5b6b3d8da55e9ddcbdef89a0014366abf4dcc4962f6fb2799b368caf2c8" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.276986 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa83c5b6b3d8da55e9ddcbdef89a0014366abf4dcc4962f6fb2799b368caf2c8"} err="failed to get container status \"fa83c5b6b3d8da55e9ddcbdef89a0014366abf4dcc4962f6fb2799b368caf2c8\": rpc error: code = NotFound desc = could not find container \"fa83c5b6b3d8da55e9ddcbdef89a0014366abf4dcc4962f6fb2799b368caf2c8\": container with ID starting with fa83c5b6b3d8da55e9ddcbdef89a0014366abf4dcc4962f6fb2799b368caf2c8 not found: ID does not exist" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.277005 4915 scope.go:117] "RemoveContainer" containerID="4116e70ddb762a4dc7cd825b53527324ad3c4beb8f5becda7fc1e6279d34f6e8" Nov 24 23:12:50 crc kubenswrapper[4915]: E1124 23:12:50.277216 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4116e70ddb762a4dc7cd825b53527324ad3c4beb8f5becda7fc1e6279d34f6e8\": container with ID starting with 4116e70ddb762a4dc7cd825b53527324ad3c4beb8f5becda7fc1e6279d34f6e8 not found: ID does not exist" containerID="4116e70ddb762a4dc7cd825b53527324ad3c4beb8f5becda7fc1e6279d34f6e8" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.277237 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4116e70ddb762a4dc7cd825b53527324ad3c4beb8f5becda7fc1e6279d34f6e8"} err="failed to get container status \"4116e70ddb762a4dc7cd825b53527324ad3c4beb8f5becda7fc1e6279d34f6e8\": rpc error: code = NotFound desc = could not find container \"4116e70ddb762a4dc7cd825b53527324ad3c4beb8f5becda7fc1e6279d34f6e8\": container with ID starting with 4116e70ddb762a4dc7cd825b53527324ad3c4beb8f5becda7fc1e6279d34f6e8 not found: ID does not 
exist" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.277248 4915 scope.go:117] "RemoveContainer" containerID="17de28c3ac630450d988b3309ab15d399b34975062a64a41ec8343541cea7975" Nov 24 23:12:50 crc kubenswrapper[4915]: E1124 23:12:50.277446 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17de28c3ac630450d988b3309ab15d399b34975062a64a41ec8343541cea7975\": container with ID starting with 17de28c3ac630450d988b3309ab15d399b34975062a64a41ec8343541cea7975 not found: ID does not exist" containerID="17de28c3ac630450d988b3309ab15d399b34975062a64a41ec8343541cea7975" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.277477 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17de28c3ac630450d988b3309ab15d399b34975062a64a41ec8343541cea7975"} err="failed to get container status \"17de28c3ac630450d988b3309ab15d399b34975062a64a41ec8343541cea7975\": rpc error: code = NotFound desc = could not find container \"17de28c3ac630450d988b3309ab15d399b34975062a64a41ec8343541cea7975\": container with ID starting with 17de28c3ac630450d988b3309ab15d399b34975062a64a41ec8343541cea7975 not found: ID does not exist" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.277495 4915 scope.go:117] "RemoveContainer" containerID="47fd9b87119ef463887f8ae049c15821f51e7ce73f54aafcb36947bc15ab3256" Nov 24 23:12:50 crc kubenswrapper[4915]: E1124 23:12:50.277753 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47fd9b87119ef463887f8ae049c15821f51e7ce73f54aafcb36947bc15ab3256\": container with ID starting with 47fd9b87119ef463887f8ae049c15821f51e7ce73f54aafcb36947bc15ab3256 not found: ID does not exist" containerID="47fd9b87119ef463887f8ae049c15821f51e7ce73f54aafcb36947bc15ab3256" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.277817 4915 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47fd9b87119ef463887f8ae049c15821f51e7ce73f54aafcb36947bc15ab3256"} err="failed to get container status \"47fd9b87119ef463887f8ae049c15821f51e7ce73f54aafcb36947bc15ab3256\": rpc error: code = NotFound desc = could not find container \"47fd9b87119ef463887f8ae049c15821f51e7ce73f54aafcb36947bc15ab3256\": container with ID starting with 47fd9b87119ef463887f8ae049c15821f51e7ce73f54aafcb36947bc15ab3256 not found: ID does not exist" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.464412 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f28548e-b6b5-4a05-8493-a3c1896e4c6d" path="/var/lib/kubelet/pods/4f28548e-b6b5-4a05-8493-a3c1896e4c6d/volumes" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.496297 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.513617 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.524102 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 24 23:12:50 crc kubenswrapper[4915]: E1124 23:12:50.524629 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-listener" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.524643 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-listener" Nov 24 23:12:50 crc kubenswrapper[4915]: E1124 23:12:50.524674 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f28548e-b6b5-4a05-8493-a3c1896e4c6d" containerName="heat-engine" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.524679 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f28548e-b6b5-4a05-8493-a3c1896e4c6d" containerName="heat-engine" Nov 24 23:12:50 crc kubenswrapper[4915]: E1124 
23:12:50.524696 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-evaluator" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.524702 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-evaluator" Nov 24 23:12:50 crc kubenswrapper[4915]: E1124 23:12:50.524716 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-api" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.524721 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-api" Nov 24 23:12:50 crc kubenswrapper[4915]: E1124 23:12:50.524743 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-notifier" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.524748 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-notifier" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.524987 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-api" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.525005 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f28548e-b6b5-4a05-8493-a3c1896e4c6d" containerName="heat-engine" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.525017 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-listener" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.525027 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-evaluator" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.525037 4915 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" containerName="aodh-notifier" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.529675 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.538886 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.539109 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.539227 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-497jp" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.539285 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.539431 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.540404 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.674809 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-public-tls-certs\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.674864 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " 
pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.674898 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-config-data\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.675218 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-scripts\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.675679 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-internal-tls-certs\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.675730 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfh2j\" (UniqueName: \"kubernetes.io/projected/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-kube-api-access-zfh2j\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.777719 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-internal-tls-certs\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.777983 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfh2j\" 
(UniqueName: \"kubernetes.io/projected/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-kube-api-access-zfh2j\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.778229 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-public-tls-certs\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.778381 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.778509 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-config-data\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.779067 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-scripts\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.783363 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-internal-tls-certs\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.784237 4915 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-public-tls-certs\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.795367 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-scripts\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.795651 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.796664 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-config-data\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.805963 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfh2j\" (UniqueName: \"kubernetes.io/projected/b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce-kube-api-access-zfh2j\") pod \"aodh-0\" (UID: \"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce\") " pod="openstack/aodh-0" Nov 24 23:12:50 crc kubenswrapper[4915]: I1124 23:12:50.849325 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 24 23:12:51 crc kubenswrapper[4915]: I1124 23:12:51.128972 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ddxwb" podUID="e233ad8b-763b-4205-8242-1be3217aa248" containerName="registry-server" containerID="cri-o://85fb15f2558c6380e51a235efbb0bb881ac92b6944bfc1b31108a5f9ab8b7c35" gracePeriod=2 Nov 24 23:12:51 crc kubenswrapper[4915]: I1124 23:12:51.401308 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 24 23:12:51 crc kubenswrapper[4915]: I1124 23:12:51.697508 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:51 crc kubenswrapper[4915]: I1124 23:12:51.803493 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e233ad8b-763b-4205-8242-1be3217aa248-utilities\") pod \"e233ad8b-763b-4205-8242-1be3217aa248\" (UID: \"e233ad8b-763b-4205-8242-1be3217aa248\") " Nov 24 23:12:51 crc kubenswrapper[4915]: I1124 23:12:51.804499 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cts7t\" (UniqueName: \"kubernetes.io/projected/e233ad8b-763b-4205-8242-1be3217aa248-kube-api-access-cts7t\") pod \"e233ad8b-763b-4205-8242-1be3217aa248\" (UID: \"e233ad8b-763b-4205-8242-1be3217aa248\") " Nov 24 23:12:51 crc kubenswrapper[4915]: I1124 23:12:51.804566 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e233ad8b-763b-4205-8242-1be3217aa248-catalog-content\") pod \"e233ad8b-763b-4205-8242-1be3217aa248\" (UID: \"e233ad8b-763b-4205-8242-1be3217aa248\") " Nov 24 23:12:51 crc kubenswrapper[4915]: I1124 23:12:51.806339 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e233ad8b-763b-4205-8242-1be3217aa248-utilities" (OuterVolumeSpecName: "utilities") pod "e233ad8b-763b-4205-8242-1be3217aa248" (UID: "e233ad8b-763b-4205-8242-1be3217aa248"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:12:51 crc kubenswrapper[4915]: I1124 23:12:51.812483 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e233ad8b-763b-4205-8242-1be3217aa248-kube-api-access-cts7t" (OuterVolumeSpecName: "kube-api-access-cts7t") pod "e233ad8b-763b-4205-8242-1be3217aa248" (UID: "e233ad8b-763b-4205-8242-1be3217aa248"). InnerVolumeSpecName "kube-api-access-cts7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:12:51 crc kubenswrapper[4915]: I1124 23:12:51.880081 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e233ad8b-763b-4205-8242-1be3217aa248-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e233ad8b-763b-4205-8242-1be3217aa248" (UID: "e233ad8b-763b-4205-8242-1be3217aa248"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:12:51 crc kubenswrapper[4915]: I1124 23:12:51.907824 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cts7t\" (UniqueName: \"kubernetes.io/projected/e233ad8b-763b-4205-8242-1be3217aa248-kube-api-access-cts7t\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:51 crc kubenswrapper[4915]: I1124 23:12:51.908123 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e233ad8b-763b-4205-8242-1be3217aa248-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:51 crc kubenswrapper[4915]: I1124 23:12:51.908209 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e233ad8b-763b-4205-8242-1be3217aa248-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.141146 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce","Type":"ContainerStarted","Data":"8e7ea77327a50d3514656eb1378e2d9770520a57e63be85983323f19506a37f6"} Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.141192 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce","Type":"ContainerStarted","Data":"71b8ad7021d873e90931e40bdce5aa2b2539a22554b5d995c38ebf0458923e60"} Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.144698 4915 generic.go:334] "Generic (PLEG): container finished" podID="e233ad8b-763b-4205-8242-1be3217aa248" containerID="85fb15f2558c6380e51a235efbb0bb881ac92b6944bfc1b31108a5f9ab8b7c35" exitCode=0 Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.144734 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddxwb" 
event={"ID":"e233ad8b-763b-4205-8242-1be3217aa248","Type":"ContainerDied","Data":"85fb15f2558c6380e51a235efbb0bb881ac92b6944bfc1b31108a5f9ab8b7c35"} Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.144758 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ddxwb" event={"ID":"e233ad8b-763b-4205-8242-1be3217aa248","Type":"ContainerDied","Data":"c82b9930c63140e605812c7421d14b2716e443da43a51bec3c013ac9e51b989b"} Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.144797 4915 scope.go:117] "RemoveContainer" containerID="85fb15f2558c6380e51a235efbb0bb881ac92b6944bfc1b31108a5f9ab8b7c35" Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.144791 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ddxwb" Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.166317 4915 scope.go:117] "RemoveContainer" containerID="7219166848b313e8efaa8ea88355fdc44d37680e3598a10be76c35fc5c2a7116" Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.187738 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ddxwb"] Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.197543 4915 scope.go:117] "RemoveContainer" containerID="56b96c7d87607fbcf62a8dfd7d1e74dcde5a89cd7c4f4176ebcce7cbf19f5d46" Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.201734 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ddxwb"] Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.222021 4915 scope.go:117] "RemoveContainer" containerID="85fb15f2558c6380e51a235efbb0bb881ac92b6944bfc1b31108a5f9ab8b7c35" Nov 24 23:12:52 crc kubenswrapper[4915]: E1124 23:12:52.222533 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85fb15f2558c6380e51a235efbb0bb881ac92b6944bfc1b31108a5f9ab8b7c35\": container 
with ID starting with 85fb15f2558c6380e51a235efbb0bb881ac92b6944bfc1b31108a5f9ab8b7c35 not found: ID does not exist" containerID="85fb15f2558c6380e51a235efbb0bb881ac92b6944bfc1b31108a5f9ab8b7c35" Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.222554 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85fb15f2558c6380e51a235efbb0bb881ac92b6944bfc1b31108a5f9ab8b7c35"} err="failed to get container status \"85fb15f2558c6380e51a235efbb0bb881ac92b6944bfc1b31108a5f9ab8b7c35\": rpc error: code = NotFound desc = could not find container \"85fb15f2558c6380e51a235efbb0bb881ac92b6944bfc1b31108a5f9ab8b7c35\": container with ID starting with 85fb15f2558c6380e51a235efbb0bb881ac92b6944bfc1b31108a5f9ab8b7c35 not found: ID does not exist" Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.222576 4915 scope.go:117] "RemoveContainer" containerID="7219166848b313e8efaa8ea88355fdc44d37680e3598a10be76c35fc5c2a7116" Nov 24 23:12:52 crc kubenswrapper[4915]: E1124 23:12:52.223023 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7219166848b313e8efaa8ea88355fdc44d37680e3598a10be76c35fc5c2a7116\": container with ID starting with 7219166848b313e8efaa8ea88355fdc44d37680e3598a10be76c35fc5c2a7116 not found: ID does not exist" containerID="7219166848b313e8efaa8ea88355fdc44d37680e3598a10be76c35fc5c2a7116" Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.223041 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7219166848b313e8efaa8ea88355fdc44d37680e3598a10be76c35fc5c2a7116"} err="failed to get container status \"7219166848b313e8efaa8ea88355fdc44d37680e3598a10be76c35fc5c2a7116\": rpc error: code = NotFound desc = could not find container \"7219166848b313e8efaa8ea88355fdc44d37680e3598a10be76c35fc5c2a7116\": container with ID starting with 7219166848b313e8efaa8ea88355fdc44d37680e3598a10be76c35fc5c2a7116 not 
found: ID does not exist" Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.223056 4915 scope.go:117] "RemoveContainer" containerID="56b96c7d87607fbcf62a8dfd7d1e74dcde5a89cd7c4f4176ebcce7cbf19f5d46" Nov 24 23:12:52 crc kubenswrapper[4915]: E1124 23:12:52.223289 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56b96c7d87607fbcf62a8dfd7d1e74dcde5a89cd7c4f4176ebcce7cbf19f5d46\": container with ID starting with 56b96c7d87607fbcf62a8dfd7d1e74dcde5a89cd7c4f4176ebcce7cbf19f5d46 not found: ID does not exist" containerID="56b96c7d87607fbcf62a8dfd7d1e74dcde5a89cd7c4f4176ebcce7cbf19f5d46" Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.223305 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56b96c7d87607fbcf62a8dfd7d1e74dcde5a89cd7c4f4176ebcce7cbf19f5d46"} err="failed to get container status \"56b96c7d87607fbcf62a8dfd7d1e74dcde5a89cd7c4f4176ebcce7cbf19f5d46\": rpc error: code = NotFound desc = could not find container \"56b96c7d87607fbcf62a8dfd7d1e74dcde5a89cd7c4f4176ebcce7cbf19f5d46\": container with ID starting with 56b96c7d87607fbcf62a8dfd7d1e74dcde5a89cd7c4f4176ebcce7cbf19f5d46 not found: ID does not exist" Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.463058 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e233ad8b-763b-4205-8242-1be3217aa248" path="/var/lib/kubelet/pods/e233ad8b-763b-4205-8242-1be3217aa248/volumes" Nov 24 23:12:52 crc kubenswrapper[4915]: I1124 23:12:52.489950 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f894aef5-bbf9-4a91-ab23-6d6c216d5645" path="/var/lib/kubelet/pods/f894aef5-bbf9-4a91-ab23-6d6c216d5645/volumes" Nov 24 23:12:54 crc kubenswrapper[4915]: I1124 23:12:54.173267 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce","Type":"ContainerStarted","Data":"be6b94bf87f55a93864d43f41b95cf904eca6f7142f14cb04c9cc87041b8d893"} Nov 24 23:12:55 crc kubenswrapper[4915]: I1124 23:12:55.189117 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce","Type":"ContainerStarted","Data":"08b0fb9b3fd65c08f8ac0bf09d141f31ba261c1103ee34d7e9d9319a583eb0ce"} Nov 24 23:12:56 crc kubenswrapper[4915]: I1124 23:12:56.205008 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce","Type":"ContainerStarted","Data":"f98e55c428e51ae263550f111b8150340d5a8e3dce94908c1dffeb10dadc1c1d"} Nov 24 23:12:56 crc kubenswrapper[4915]: I1124 23:12:56.238207 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.25451034 podStartE2EDuration="6.238191146s" podCreationTimestamp="2025-11-24 23:12:50 +0000 UTC" firstStartedPulling="2025-11-24 23:12:51.411507079 +0000 UTC m=+6789.727759262" lastFinishedPulling="2025-11-24 23:12:55.395187885 +0000 UTC m=+6793.711440068" observedRunningTime="2025-11-24 23:12:56.233595521 +0000 UTC m=+6794.549847694" watchObservedRunningTime="2025-11-24 23:12:56.238191146 +0000 UTC m=+6794.554443319" Nov 24 23:12:58 crc kubenswrapper[4915]: I1124 23:12:58.228475 4915 generic.go:334] "Generic (PLEG): container finished" podID="5cde4f81-df73-4990-885c-690d843e90bb" containerID="4008e1cba667dba9f161e168ea340f91b40faa821707956421ac97d16e7f4162" exitCode=100 Nov 24 23:12:58 crc kubenswrapper[4915]: I1124 23:12:58.228811 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5cde4f81-df73-4990-885c-690d843e90bb","Type":"ContainerDied","Data":"4008e1cba667dba9f161e168ea340f91b40faa821707956421ac97d16e7f4162"} Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.736359 4915 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.801867 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-ssh-key\") pod \"5cde4f81-df73-4990-885c-690d843e90bb\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.801937 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-ca-certs\") pod \"5cde4f81-df73-4990-885c-690d843e90bb\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.802014 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5cde4f81-df73-4990-885c-690d843e90bb-test-operator-ephemeral-temporary\") pod \"5cde4f81-df73-4990-885c-690d843e90bb\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.802058 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"5cde4f81-df73-4990-885c-690d843e90bb\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.802132 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cde4f81-df73-4990-885c-690d843e90bb-config-data\") pod \"5cde4f81-df73-4990-885c-690d843e90bb\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.802165 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-fhlkd\" (UniqueName: \"kubernetes.io/projected/5cde4f81-df73-4990-885c-690d843e90bb-kube-api-access-fhlkd\") pod \"5cde4f81-df73-4990-885c-690d843e90bb\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.802199 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-openstack-config-secret\") pod \"5cde4f81-df73-4990-885c-690d843e90bb\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.802349 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5cde4f81-df73-4990-885c-690d843e90bb-openstack-config\") pod \"5cde4f81-df73-4990-885c-690d843e90bb\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.802519 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5cde4f81-df73-4990-885c-690d843e90bb-test-operator-ephemeral-workdir\") pod \"5cde4f81-df73-4990-885c-690d843e90bb\" (UID: \"5cde4f81-df73-4990-885c-690d843e90bb\") " Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.804515 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cde4f81-df73-4990-885c-690d843e90bb-config-data" (OuterVolumeSpecName: "config-data") pod "5cde4f81-df73-4990-885c-690d843e90bb" (UID: "5cde4f81-df73-4990-885c-690d843e90bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.804633 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cde4f81-df73-4990-885c-690d843e90bb-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "5cde4f81-df73-4990-885c-690d843e90bb" (UID: "5cde4f81-df73-4990-885c-690d843e90bb"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.808445 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cde4f81-df73-4990-885c-690d843e90bb-kube-api-access-fhlkd" (OuterVolumeSpecName: "kube-api-access-fhlkd") pod "5cde4f81-df73-4990-885c-690d843e90bb" (UID: "5cde4f81-df73-4990-885c-690d843e90bb"). InnerVolumeSpecName "kube-api-access-fhlkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.820815 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cde4f81-df73-4990-885c-690d843e90bb-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "5cde4f81-df73-4990-885c-690d843e90bb" (UID: "5cde4f81-df73-4990-885c-690d843e90bb"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.824189 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "5cde4f81-df73-4990-885c-690d843e90bb" (UID: "5cde4f81-df73-4990-885c-690d843e90bb"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.854687 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5cde4f81-df73-4990-885c-690d843e90bb" (UID: "5cde4f81-df73-4990-885c-690d843e90bb"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.855329 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5cde4f81-df73-4990-885c-690d843e90bb" (UID: "5cde4f81-df73-4990-885c-690d843e90bb"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.858237 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "5cde4f81-df73-4990-885c-690d843e90bb" (UID: "5cde4f81-df73-4990-885c-690d843e90bb"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.878912 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cde4f81-df73-4990-885c-690d843e90bb-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5cde4f81-df73-4990-885c-690d843e90bb" (UID: "5cde4f81-df73-4990-885c-690d843e90bb"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.906408 4915 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5cde4f81-df73-4990-885c-690d843e90bb-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.906447 4915 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/5cde4f81-df73-4990-885c-690d843e90bb-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.906462 4915 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.906476 4915 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.906489 4915 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/5cde4f81-df73-4990-885c-690d843e90bb-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.906529 4915 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.906542 4915 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cde4f81-df73-4990-885c-690d843e90bb-config-data\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:59 crc kubenswrapper[4915]: 
I1124 23:12:59.906555 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhlkd\" (UniqueName: \"kubernetes.io/projected/5cde4f81-df73-4990-885c-690d843e90bb-kube-api-access-fhlkd\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.906567 4915 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5cde4f81-df73-4990-885c-690d843e90bb-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 24 23:12:59 crc kubenswrapper[4915]: I1124 23:12:59.942369 4915 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 24 23:13:00 crc kubenswrapper[4915]: I1124 23:13:00.009298 4915 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 24 23:13:00 crc kubenswrapper[4915]: I1124 23:13:00.252103 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"5cde4f81-df73-4990-885c-690d843e90bb","Type":"ContainerDied","Data":"4eba24cee01cc9bf8ce52d5e195c628368fd31ad07024fad4e919c5511e7a548"} Nov 24 23:13:00 crc kubenswrapper[4915]: I1124 23:13:00.252142 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eba24cee01cc9bf8ce52d5e195c628368fd31ad07024fad4e919c5511e7a548" Nov 24 23:13:00 crc kubenswrapper[4915]: I1124 23:13:00.252145 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.477949 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 23:13:06 crc kubenswrapper[4915]: E1124 23:13:06.479914 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e233ad8b-763b-4205-8242-1be3217aa248" containerName="registry-server" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.479939 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e233ad8b-763b-4205-8242-1be3217aa248" containerName="registry-server" Nov 24 23:13:06 crc kubenswrapper[4915]: E1124 23:13:06.479960 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e233ad8b-763b-4205-8242-1be3217aa248" containerName="extract-content" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.479968 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e233ad8b-763b-4205-8242-1be3217aa248" containerName="extract-content" Nov 24 23:13:06 crc kubenswrapper[4915]: E1124 23:13:06.479996 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e233ad8b-763b-4205-8242-1be3217aa248" containerName="extract-utilities" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.480005 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e233ad8b-763b-4205-8242-1be3217aa248" containerName="extract-utilities" Nov 24 23:13:06 crc kubenswrapper[4915]: E1124 23:13:06.480047 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cde4f81-df73-4990-885c-690d843e90bb" containerName="tempest-tests-tempest-tests-runner" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.480056 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cde4f81-df73-4990-885c-690d843e90bb" containerName="tempest-tests-tempest-tests-runner" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.480728 4915 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="5cde4f81-df73-4990-885c-690d843e90bb" containerName="tempest-tests-tempest-tests-runner" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.480796 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e233ad8b-763b-4205-8242-1be3217aa248" containerName="registry-server" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.482863 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.486601 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rh5x7" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.520065 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.605094 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tftpc\" (UniqueName: \"kubernetes.io/projected/73d33248-5d5a-4ff9-bea5-c9d45fd8b48c-kube-api-access-tftpc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"73d33248-5d5a-4ff9-bea5-c9d45fd8b48c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.605264 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"73d33248-5d5a-4ff9-bea5-c9d45fd8b48c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.707320 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"73d33248-5d5a-4ff9-bea5-c9d45fd8b48c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.707567 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tftpc\" (UniqueName: \"kubernetes.io/projected/73d33248-5d5a-4ff9-bea5-c9d45fd8b48c-kube-api-access-tftpc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"73d33248-5d5a-4ff9-bea5-c9d45fd8b48c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.708199 4915 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"73d33248-5d5a-4ff9-bea5-c9d45fd8b48c\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.735075 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tftpc\" (UniqueName: \"kubernetes.io/projected/73d33248-5d5a-4ff9-bea5-c9d45fd8b48c-kube-api-access-tftpc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"73d33248-5d5a-4ff9-bea5-c9d45fd8b48c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 23:13:06 crc kubenswrapper[4915]: I1124 23:13:06.737927 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"73d33248-5d5a-4ff9-bea5-c9d45fd8b48c\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 23:13:06 
crc kubenswrapper[4915]: I1124 23:13:06.814154 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 24 23:13:07 crc kubenswrapper[4915]: I1124 23:13:07.931561 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 24 23:13:07 crc kubenswrapper[4915]: W1124 23:13:07.955553 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73d33248_5d5a_4ff9_bea5_c9d45fd8b48c.slice/crio-988ec90daed8ff970543565aadeae0f4ef1579d04655abd5dfdf7a4dc2a0696e WatchSource:0}: Error finding container 988ec90daed8ff970543565aadeae0f4ef1579d04655abd5dfdf7a4dc2a0696e: Status 404 returned error can't find the container with id 988ec90daed8ff970543565aadeae0f4ef1579d04655abd5dfdf7a4dc2a0696e Nov 24 23:13:08 crc kubenswrapper[4915]: I1124 23:13:08.357026 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"73d33248-5d5a-4ff9-bea5-c9d45fd8b48c","Type":"ContainerStarted","Data":"988ec90daed8ff970543565aadeae0f4ef1579d04655abd5dfdf7a4dc2a0696e"} Nov 24 23:13:09 crc kubenswrapper[4915]: I1124 23:13:09.370461 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"73d33248-5d5a-4ff9-bea5-c9d45fd8b48c","Type":"ContainerStarted","Data":"86c33f9bfd1d295f9d6658b2573608304305e1fc4cbbd57997076e6b76ee9f84"} Nov 24 23:13:09 crc kubenswrapper[4915]: I1124 23:13:09.388205 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.477934649 podStartE2EDuration="3.388187715s" podCreationTimestamp="2025-11-24 23:13:06 +0000 UTC" firstStartedPulling="2025-11-24 23:13:07.966357928 +0000 UTC 
m=+6806.282610141" lastFinishedPulling="2025-11-24 23:13:08.876611034 +0000 UTC m=+6807.192863207" observedRunningTime="2025-11-24 23:13:09.385297677 +0000 UTC m=+6807.701549880" watchObservedRunningTime="2025-11-24 23:13:09.388187715 +0000 UTC m=+6807.704439888" Nov 24 23:13:26 crc kubenswrapper[4915]: I1124 23:13:26.045212 4915 scope.go:117] "RemoveContainer" containerID="cb8fff6986184c87fa6d6717c605d46266420a1bcac82e9c44ce5e61858e5d9d" Nov 24 23:13:49 crc kubenswrapper[4915]: I1124 23:13:49.538484 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4svfd/must-gather-2qzhh"] Nov 24 23:13:49 crc kubenswrapper[4915]: I1124 23:13:49.542980 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4svfd/must-gather-2qzhh" Nov 24 23:13:49 crc kubenswrapper[4915]: I1124 23:13:49.544917 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4svfd"/"default-dockercfg-46f69" Nov 24 23:13:49 crc kubenswrapper[4915]: I1124 23:13:49.545642 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4svfd"/"openshift-service-ca.crt" Nov 24 23:13:49 crc kubenswrapper[4915]: I1124 23:13:49.549626 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4svfd"/"kube-root-ca.crt" Nov 24 23:13:49 crc kubenswrapper[4915]: I1124 23:13:49.565713 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4svfd/must-gather-2qzhh"] Nov 24 23:13:49 crc kubenswrapper[4915]: I1124 23:13:49.707608 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/23858cf5-6d7d-4875-9586-92ebb2e329d0-must-gather-output\") pod \"must-gather-2qzhh\" (UID: \"23858cf5-6d7d-4875-9586-92ebb2e329d0\") " pod="openshift-must-gather-4svfd/must-gather-2qzhh" Nov 24 23:13:49 crc kubenswrapper[4915]: I1124 
23:13:49.707660 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwsst\" (UniqueName: \"kubernetes.io/projected/23858cf5-6d7d-4875-9586-92ebb2e329d0-kube-api-access-xwsst\") pod \"must-gather-2qzhh\" (UID: \"23858cf5-6d7d-4875-9586-92ebb2e329d0\") " pod="openshift-must-gather-4svfd/must-gather-2qzhh" Nov 24 23:13:49 crc kubenswrapper[4915]: I1124 23:13:49.810361 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/23858cf5-6d7d-4875-9586-92ebb2e329d0-must-gather-output\") pod \"must-gather-2qzhh\" (UID: \"23858cf5-6d7d-4875-9586-92ebb2e329d0\") " pod="openshift-must-gather-4svfd/must-gather-2qzhh" Nov 24 23:13:49 crc kubenswrapper[4915]: I1124 23:13:49.810410 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwsst\" (UniqueName: \"kubernetes.io/projected/23858cf5-6d7d-4875-9586-92ebb2e329d0-kube-api-access-xwsst\") pod \"must-gather-2qzhh\" (UID: \"23858cf5-6d7d-4875-9586-92ebb2e329d0\") " pod="openshift-must-gather-4svfd/must-gather-2qzhh" Nov 24 23:13:49 crc kubenswrapper[4915]: I1124 23:13:49.810834 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/23858cf5-6d7d-4875-9586-92ebb2e329d0-must-gather-output\") pod \"must-gather-2qzhh\" (UID: \"23858cf5-6d7d-4875-9586-92ebb2e329d0\") " pod="openshift-must-gather-4svfd/must-gather-2qzhh" Nov 24 23:13:49 crc kubenswrapper[4915]: I1124 23:13:49.833622 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwsst\" (UniqueName: \"kubernetes.io/projected/23858cf5-6d7d-4875-9586-92ebb2e329d0-kube-api-access-xwsst\") pod \"must-gather-2qzhh\" (UID: \"23858cf5-6d7d-4875-9586-92ebb2e329d0\") " pod="openshift-must-gather-4svfd/must-gather-2qzhh" Nov 24 23:13:49 crc kubenswrapper[4915]: I1124 
23:13:49.862226 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4svfd/must-gather-2qzhh" Nov 24 23:13:50 crc kubenswrapper[4915]: I1124 23:13:50.520964 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4svfd/must-gather-2qzhh"] Nov 24 23:13:50 crc kubenswrapper[4915]: I1124 23:13:50.998172 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4svfd/must-gather-2qzhh" event={"ID":"23858cf5-6d7d-4875-9586-92ebb2e329d0","Type":"ContainerStarted","Data":"24953246f907e27f97cd071bfe29e639f2d8608b49aa7226c52edee11c21a1e6"} Nov 24 23:13:56 crc kubenswrapper[4915]: I1124 23:13:56.055878 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4svfd/must-gather-2qzhh" event={"ID":"23858cf5-6d7d-4875-9586-92ebb2e329d0","Type":"ContainerStarted","Data":"e3ff8640a16813f7fdbe44f0be2b2dd43c17a0714a52ead8dc915711b42aa6a1"} Nov 24 23:13:56 crc kubenswrapper[4915]: I1124 23:13:56.056467 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4svfd/must-gather-2qzhh" event={"ID":"23858cf5-6d7d-4875-9586-92ebb2e329d0","Type":"ContainerStarted","Data":"25ba5d523755facfc180e1fd1440ae2ae34e80005910166ecd79fc4ed75d3481"} Nov 24 23:13:56 crc kubenswrapper[4915]: I1124 23:13:56.074989 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4svfd/must-gather-2qzhh" podStartSLOduration=2.377464934 podStartE2EDuration="7.074970473s" podCreationTimestamp="2025-11-24 23:13:49 +0000 UTC" firstStartedPulling="2025-11-24 23:13:50.523564129 +0000 UTC m=+6848.839816302" lastFinishedPulling="2025-11-24 23:13:55.221069668 +0000 UTC m=+6853.537321841" observedRunningTime="2025-11-24 23:13:56.067909621 +0000 UTC m=+6854.384161794" watchObservedRunningTime="2025-11-24 23:13:56.074970473 +0000 UTC m=+6854.391222656" Nov 24 23:14:00 crc kubenswrapper[4915]: I1124 23:14:00.728941 4915 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-must-gather-4svfd/crc-debug-d8rk5"] Nov 24 23:14:00 crc kubenswrapper[4915]: I1124 23:14:00.730936 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4svfd/crc-debug-d8rk5" Nov 24 23:14:00 crc kubenswrapper[4915]: I1124 23:14:00.911866 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f7d2760-1c73-42e7-a12d-52cfc38a7830-host\") pod \"crc-debug-d8rk5\" (UID: \"0f7d2760-1c73-42e7-a12d-52cfc38a7830\") " pod="openshift-must-gather-4svfd/crc-debug-d8rk5" Nov 24 23:14:00 crc kubenswrapper[4915]: I1124 23:14:00.912987 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48rp5\" (UniqueName: \"kubernetes.io/projected/0f7d2760-1c73-42e7-a12d-52cfc38a7830-kube-api-access-48rp5\") pod \"crc-debug-d8rk5\" (UID: \"0f7d2760-1c73-42e7-a12d-52cfc38a7830\") " pod="openshift-must-gather-4svfd/crc-debug-d8rk5" Nov 24 23:14:01 crc kubenswrapper[4915]: I1124 23:14:01.015528 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48rp5\" (UniqueName: \"kubernetes.io/projected/0f7d2760-1c73-42e7-a12d-52cfc38a7830-kube-api-access-48rp5\") pod \"crc-debug-d8rk5\" (UID: \"0f7d2760-1c73-42e7-a12d-52cfc38a7830\") " pod="openshift-must-gather-4svfd/crc-debug-d8rk5" Nov 24 23:14:01 crc kubenswrapper[4915]: I1124 23:14:01.015677 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f7d2760-1c73-42e7-a12d-52cfc38a7830-host\") pod \"crc-debug-d8rk5\" (UID: \"0f7d2760-1c73-42e7-a12d-52cfc38a7830\") " pod="openshift-must-gather-4svfd/crc-debug-d8rk5" Nov 24 23:14:01 crc kubenswrapper[4915]: I1124 23:14:01.015835 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/0f7d2760-1c73-42e7-a12d-52cfc38a7830-host\") pod \"crc-debug-d8rk5\" (UID: \"0f7d2760-1c73-42e7-a12d-52cfc38a7830\") " pod="openshift-must-gather-4svfd/crc-debug-d8rk5" Nov 24 23:14:01 crc kubenswrapper[4915]: I1124 23:14:01.038757 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48rp5\" (UniqueName: \"kubernetes.io/projected/0f7d2760-1c73-42e7-a12d-52cfc38a7830-kube-api-access-48rp5\") pod \"crc-debug-d8rk5\" (UID: \"0f7d2760-1c73-42e7-a12d-52cfc38a7830\") " pod="openshift-must-gather-4svfd/crc-debug-d8rk5" Nov 24 23:14:01 crc kubenswrapper[4915]: I1124 23:14:01.049082 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4svfd/crc-debug-d8rk5" Nov 24 23:14:01 crc kubenswrapper[4915]: I1124 23:14:01.100538 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 23:14:01 crc kubenswrapper[4915]: I1124 23:14:01.114127 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4svfd/crc-debug-d8rk5" event={"ID":"0f7d2760-1c73-42e7-a12d-52cfc38a7830","Type":"ContainerStarted","Data":"6caad406c77774ca05b5225457b0e89d85c384690abce1f992c7e48ee85adb9c"} Nov 24 23:14:13 crc kubenswrapper[4915]: I1124 23:14:13.292221 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4svfd/crc-debug-d8rk5" event={"ID":"0f7d2760-1c73-42e7-a12d-52cfc38a7830","Type":"ContainerStarted","Data":"00f18dc3f8d40b4e9dbd0951a27a819a601abe8f559f08dd0837ac4dda7addd6"} Nov 24 23:14:13 crc kubenswrapper[4915]: I1124 23:14:13.318987 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4svfd/crc-debug-d8rk5" podStartSLOduration=2.068218776 podStartE2EDuration="13.318966787s" podCreationTimestamp="2025-11-24 23:14:00 +0000 UTC" firstStartedPulling="2025-11-24 23:14:01.100295523 +0000 UTC m=+6859.416547696" lastFinishedPulling="2025-11-24 
23:14:12.351043534 +0000 UTC m=+6870.667295707" observedRunningTime="2025-11-24 23:14:13.30647352 +0000 UTC m=+6871.622725713" watchObservedRunningTime="2025-11-24 23:14:13.318966787 +0000 UTC m=+6871.635218970" Nov 24 23:14:24 crc kubenswrapper[4915]: I1124 23:14:24.327455 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:14:24 crc kubenswrapper[4915]: I1124 23:14:24.328011 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:14:54 crc kubenswrapper[4915]: I1124 23:14:54.326945 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:14:54 crc kubenswrapper[4915]: I1124 23:14:54.327447 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:14:57 crc kubenswrapper[4915]: I1124 23:14:57.847750 4915 generic.go:334] "Generic (PLEG): container finished" podID="0f7d2760-1c73-42e7-a12d-52cfc38a7830" containerID="00f18dc3f8d40b4e9dbd0951a27a819a601abe8f559f08dd0837ac4dda7addd6" exitCode=0 Nov 24 23:14:57 crc 
kubenswrapper[4915]: I1124 23:14:57.847835 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4svfd/crc-debug-d8rk5" event={"ID":"0f7d2760-1c73-42e7-a12d-52cfc38a7830","Type":"ContainerDied","Data":"00f18dc3f8d40b4e9dbd0951a27a819a601abe8f559f08dd0837ac4dda7addd6"} Nov 24 23:14:58 crc kubenswrapper[4915]: I1124 23:14:58.980373 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4svfd/crc-debug-d8rk5" Nov 24 23:14:59 crc kubenswrapper[4915]: I1124 23:14:59.017998 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4svfd/crc-debug-d8rk5"] Nov 24 23:14:59 crc kubenswrapper[4915]: I1124 23:14:59.028167 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4svfd/crc-debug-d8rk5"] Nov 24 23:14:59 crc kubenswrapper[4915]: I1124 23:14:59.071835 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f7d2760-1c73-42e7-a12d-52cfc38a7830-host\") pod \"0f7d2760-1c73-42e7-a12d-52cfc38a7830\" (UID: \"0f7d2760-1c73-42e7-a12d-52cfc38a7830\") " Nov 24 23:14:59 crc kubenswrapper[4915]: I1124 23:14:59.071965 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f7d2760-1c73-42e7-a12d-52cfc38a7830-host" (OuterVolumeSpecName: "host") pod "0f7d2760-1c73-42e7-a12d-52cfc38a7830" (UID: "0f7d2760-1c73-42e7-a12d-52cfc38a7830"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 23:14:59 crc kubenswrapper[4915]: I1124 23:14:59.071981 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48rp5\" (UniqueName: \"kubernetes.io/projected/0f7d2760-1c73-42e7-a12d-52cfc38a7830-kube-api-access-48rp5\") pod \"0f7d2760-1c73-42e7-a12d-52cfc38a7830\" (UID: \"0f7d2760-1c73-42e7-a12d-52cfc38a7830\") " Nov 24 23:14:59 crc kubenswrapper[4915]: I1124 23:14:59.073104 4915 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0f7d2760-1c73-42e7-a12d-52cfc38a7830-host\") on node \"crc\" DevicePath \"\"" Nov 24 23:14:59 crc kubenswrapper[4915]: I1124 23:14:59.086015 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f7d2760-1c73-42e7-a12d-52cfc38a7830-kube-api-access-48rp5" (OuterVolumeSpecName: "kube-api-access-48rp5") pod "0f7d2760-1c73-42e7-a12d-52cfc38a7830" (UID: "0f7d2760-1c73-42e7-a12d-52cfc38a7830"). InnerVolumeSpecName "kube-api-access-48rp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:14:59 crc kubenswrapper[4915]: I1124 23:14:59.175559 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48rp5\" (UniqueName: \"kubernetes.io/projected/0f7d2760-1c73-42e7-a12d-52cfc38a7830-kube-api-access-48rp5\") on node \"crc\" DevicePath \"\"" Nov 24 23:14:59 crc kubenswrapper[4915]: I1124 23:14:59.876097 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6caad406c77774ca05b5225457b0e89d85c384690abce1f992c7e48ee85adb9c" Nov 24 23:14:59 crc kubenswrapper[4915]: I1124 23:14:59.876183 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4svfd/crc-debug-d8rk5" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.196735 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m"] Nov 24 23:15:00 crc kubenswrapper[4915]: E1124 23:15:00.197328 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f7d2760-1c73-42e7-a12d-52cfc38a7830" containerName="container-00" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.197341 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f7d2760-1c73-42e7-a12d-52cfc38a7830" containerName="container-00" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.197593 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f7d2760-1c73-42e7-a12d-52cfc38a7830" containerName="container-00" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.198467 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.201482 4915 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.202294 4915 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.235222 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m"] Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.276808 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4svfd/crc-debug-qfgrb"] Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.278273 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4svfd/crc-debug-qfgrb" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.303812 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j48ff\" (UniqueName: \"kubernetes.io/projected/04e2ecb1-4815-4e78-a66a-197b52705a66-kube-api-access-j48ff\") pod \"collect-profiles-29400435-hd86m\" (UID: \"04e2ecb1-4815-4e78-a66a-197b52705a66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.303886 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04e2ecb1-4815-4e78-a66a-197b52705a66-secret-volume\") pod \"collect-profiles-29400435-hd86m\" (UID: \"04e2ecb1-4815-4e78-a66a-197b52705a66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.304199 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04e2ecb1-4815-4e78-a66a-197b52705a66-config-volume\") pod \"collect-profiles-29400435-hd86m\" (UID: \"04e2ecb1-4815-4e78-a66a-197b52705a66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.406332 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/81cbc9d4-3e2e-4160-a984-78cf968adf3e-host\") pod \"crc-debug-qfgrb\" (UID: \"81cbc9d4-3e2e-4160-a984-78cf968adf3e\") " pod="openshift-must-gather-4svfd/crc-debug-qfgrb" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.406464 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j48ff\" (UniqueName: 
\"kubernetes.io/projected/04e2ecb1-4815-4e78-a66a-197b52705a66-kube-api-access-j48ff\") pod \"collect-profiles-29400435-hd86m\" (UID: \"04e2ecb1-4815-4e78-a66a-197b52705a66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.406500 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04e2ecb1-4815-4e78-a66a-197b52705a66-secret-volume\") pod \"collect-profiles-29400435-hd86m\" (UID: \"04e2ecb1-4815-4e78-a66a-197b52705a66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.406633 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwh5t\" (UniqueName: \"kubernetes.io/projected/81cbc9d4-3e2e-4160-a984-78cf968adf3e-kube-api-access-bwh5t\") pod \"crc-debug-qfgrb\" (UID: \"81cbc9d4-3e2e-4160-a984-78cf968adf3e\") " pod="openshift-must-gather-4svfd/crc-debug-qfgrb" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.406673 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04e2ecb1-4815-4e78-a66a-197b52705a66-config-volume\") pod \"collect-profiles-29400435-hd86m\" (UID: \"04e2ecb1-4815-4e78-a66a-197b52705a66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.409019 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04e2ecb1-4815-4e78-a66a-197b52705a66-config-volume\") pod \"collect-profiles-29400435-hd86m\" (UID: \"04e2ecb1-4815-4e78-a66a-197b52705a66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.414440 4915 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04e2ecb1-4815-4e78-a66a-197b52705a66-secret-volume\") pod \"collect-profiles-29400435-hd86m\" (UID: \"04e2ecb1-4815-4e78-a66a-197b52705a66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.428375 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j48ff\" (UniqueName: \"kubernetes.io/projected/04e2ecb1-4815-4e78-a66a-197b52705a66-kube-api-access-j48ff\") pod \"collect-profiles-29400435-hd86m\" (UID: \"04e2ecb1-4815-4e78-a66a-197b52705a66\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.446666 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f7d2760-1c73-42e7-a12d-52cfc38a7830" path="/var/lib/kubelet/pods/0f7d2760-1c73-42e7-a12d-52cfc38a7830/volumes" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.509009 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwh5t\" (UniqueName: \"kubernetes.io/projected/81cbc9d4-3e2e-4160-a984-78cf968adf3e-kube-api-access-bwh5t\") pod \"crc-debug-qfgrb\" (UID: \"81cbc9d4-3e2e-4160-a984-78cf968adf3e\") " pod="openshift-must-gather-4svfd/crc-debug-qfgrb" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.509177 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/81cbc9d4-3e2e-4160-a984-78cf968adf3e-host\") pod \"crc-debug-qfgrb\" (UID: \"81cbc9d4-3e2e-4160-a984-78cf968adf3e\") " pod="openshift-must-gather-4svfd/crc-debug-qfgrb" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.509429 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/81cbc9d4-3e2e-4160-a984-78cf968adf3e-host\") pod \"crc-debug-qfgrb\" (UID: \"81cbc9d4-3e2e-4160-a984-78cf968adf3e\") " pod="openshift-must-gather-4svfd/crc-debug-qfgrb" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.531193 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwh5t\" (UniqueName: \"kubernetes.io/projected/81cbc9d4-3e2e-4160-a984-78cf968adf3e-kube-api-access-bwh5t\") pod \"crc-debug-qfgrb\" (UID: \"81cbc9d4-3e2e-4160-a984-78cf968adf3e\") " pod="openshift-must-gather-4svfd/crc-debug-qfgrb" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.537939 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.601345 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4svfd/crc-debug-qfgrb" Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.889763 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4svfd/crc-debug-qfgrb" event={"ID":"81cbc9d4-3e2e-4160-a984-78cf968adf3e","Type":"ContainerStarted","Data":"95e765f7f70eccda98bce04fd71d49a097936b769c11fd131d0ba3b0f839f0f5"} Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.890054 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4svfd/crc-debug-qfgrb" event={"ID":"81cbc9d4-3e2e-4160-a984-78cf968adf3e","Type":"ContainerStarted","Data":"aa5329b11458cbbe5a67d452ac91cbf84d3523bcac5ddd173ee212f7cabacbbe"} Nov 24 23:15:00 crc kubenswrapper[4915]: I1124 23:15:00.901045 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4svfd/crc-debug-qfgrb" podStartSLOduration=0.901030825 podStartE2EDuration="901.030825ms" podCreationTimestamp="2025-11-24 23:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 23:15:00.90007368 +0000 UTC m=+6919.216325843" watchObservedRunningTime="2025-11-24 23:15:00.901030825 +0000 UTC m=+6919.217282998" Nov 24 23:15:01 crc kubenswrapper[4915]: I1124 23:15:01.037123 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m"] Nov 24 23:15:01 crc kubenswrapper[4915]: I1124 23:15:01.901079 4915 generic.go:334] "Generic (PLEG): container finished" podID="81cbc9d4-3e2e-4160-a984-78cf968adf3e" containerID="95e765f7f70eccda98bce04fd71d49a097936b769c11fd131d0ba3b0f839f0f5" exitCode=0 Nov 24 23:15:01 crc kubenswrapper[4915]: I1124 23:15:01.901417 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4svfd/crc-debug-qfgrb" event={"ID":"81cbc9d4-3e2e-4160-a984-78cf968adf3e","Type":"ContainerDied","Data":"95e765f7f70eccda98bce04fd71d49a097936b769c11fd131d0ba3b0f839f0f5"} Nov 24 23:15:01 crc kubenswrapper[4915]: I1124 23:15:01.904078 4915 generic.go:334] "Generic (PLEG): container finished" podID="04e2ecb1-4815-4e78-a66a-197b52705a66" containerID="b31fd0eaf67b65b19fd9cd245fb14cebac6eca6449759c1d4a38e0bcbe4ae18c" exitCode=0 Nov 24 23:15:01 crc kubenswrapper[4915]: I1124 23:15:01.904916 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" event={"ID":"04e2ecb1-4815-4e78-a66a-197b52705a66","Type":"ContainerDied","Data":"b31fd0eaf67b65b19fd9cd245fb14cebac6eca6449759c1d4a38e0bcbe4ae18c"} Nov 24 23:15:01 crc kubenswrapper[4915]: I1124 23:15:01.904986 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" event={"ID":"04e2ecb1-4815-4e78-a66a-197b52705a66","Type":"ContainerStarted","Data":"0d563d6adc981a13cd30fd7fa565940ee4847e9506de9579e2a1c86c48f0f196"} Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.046987 4915 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4svfd/crc-debug-qfgrb" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.104080 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4svfd/crc-debug-qfgrb"] Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.120937 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4svfd/crc-debug-qfgrb"] Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.191382 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/81cbc9d4-3e2e-4160-a984-78cf968adf3e-host\") pod \"81cbc9d4-3e2e-4160-a984-78cf968adf3e\" (UID: \"81cbc9d4-3e2e-4160-a984-78cf968adf3e\") " Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.191445 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwh5t\" (UniqueName: \"kubernetes.io/projected/81cbc9d4-3e2e-4160-a984-78cf968adf3e-kube-api-access-bwh5t\") pod \"81cbc9d4-3e2e-4160-a984-78cf968adf3e\" (UID: \"81cbc9d4-3e2e-4160-a984-78cf968adf3e\") " Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.191459 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/81cbc9d4-3e2e-4160-a984-78cf968adf3e-host" (OuterVolumeSpecName: "host") pod "81cbc9d4-3e2e-4160-a984-78cf968adf3e" (UID: "81cbc9d4-3e2e-4160-a984-78cf968adf3e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.192025 4915 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/81cbc9d4-3e2e-4160-a984-78cf968adf3e-host\") on node \"crc\" DevicePath \"\"" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.198033 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81cbc9d4-3e2e-4160-a984-78cf968adf3e-kube-api-access-bwh5t" (OuterVolumeSpecName: "kube-api-access-bwh5t") pod "81cbc9d4-3e2e-4160-a984-78cf968adf3e" (UID: "81cbc9d4-3e2e-4160-a984-78cf968adf3e"). InnerVolumeSpecName "kube-api-access-bwh5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.295238 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwh5t\" (UniqueName: \"kubernetes.io/projected/81cbc9d4-3e2e-4160-a984-78cf968adf3e-kube-api-access-bwh5t\") on node \"crc\" DevicePath \"\"" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.365321 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.499207 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04e2ecb1-4815-4e78-a66a-197b52705a66-config-volume\") pod \"04e2ecb1-4815-4e78-a66a-197b52705a66\" (UID: \"04e2ecb1-4815-4e78-a66a-197b52705a66\") " Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.500068 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04e2ecb1-4815-4e78-a66a-197b52705a66-config-volume" (OuterVolumeSpecName: "config-volume") pod "04e2ecb1-4815-4e78-a66a-197b52705a66" (UID: "04e2ecb1-4815-4e78-a66a-197b52705a66"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.500131 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04e2ecb1-4815-4e78-a66a-197b52705a66-secret-volume\") pod \"04e2ecb1-4815-4e78-a66a-197b52705a66\" (UID: \"04e2ecb1-4815-4e78-a66a-197b52705a66\") " Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.500475 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j48ff\" (UniqueName: \"kubernetes.io/projected/04e2ecb1-4815-4e78-a66a-197b52705a66-kube-api-access-j48ff\") pod \"04e2ecb1-4815-4e78-a66a-197b52705a66\" (UID: \"04e2ecb1-4815-4e78-a66a-197b52705a66\") " Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.502127 4915 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04e2ecb1-4815-4e78-a66a-197b52705a66-config-volume\") on node \"crc\" DevicePath \"\"" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.505361 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04e2ecb1-4815-4e78-a66a-197b52705a66-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "04e2ecb1-4815-4e78-a66a-197b52705a66" (UID: "04e2ecb1-4815-4e78-a66a-197b52705a66"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.505539 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04e2ecb1-4815-4e78-a66a-197b52705a66-kube-api-access-j48ff" (OuterVolumeSpecName: "kube-api-access-j48ff") pod "04e2ecb1-4815-4e78-a66a-197b52705a66" (UID: "04e2ecb1-4815-4e78-a66a-197b52705a66"). InnerVolumeSpecName "kube-api-access-j48ff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.604194 4915 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/04e2ecb1-4815-4e78-a66a-197b52705a66-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.604235 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j48ff\" (UniqueName: \"kubernetes.io/projected/04e2ecb1-4815-4e78-a66a-197b52705a66-kube-api-access-j48ff\") on node \"crc\" DevicePath \"\"" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.934446 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa5329b11458cbbe5a67d452ac91cbf84d3523bcac5ddd173ee212f7cabacbbe" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.934683 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4svfd/crc-debug-qfgrb" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.939449 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" event={"ID":"04e2ecb1-4815-4e78-a66a-197b52705a66","Type":"ContainerDied","Data":"0d563d6adc981a13cd30fd7fa565940ee4847e9506de9579e2a1c86c48f0f196"} Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.939687 4915 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d563d6adc981a13cd30fd7fa565940ee4847e9506de9579e2a1c86c48f0f196" Nov 24 23:15:03 crc kubenswrapper[4915]: I1124 23:15:03.939556 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29400435-hd86m" Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.300466 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4svfd/crc-debug-vhfft"] Nov 24 23:15:04 crc kubenswrapper[4915]: E1124 23:15:04.300968 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81cbc9d4-3e2e-4160-a984-78cf968adf3e" containerName="container-00" Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.300985 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="81cbc9d4-3e2e-4160-a984-78cf968adf3e" containerName="container-00" Nov 24 23:15:04 crc kubenswrapper[4915]: E1124 23:15:04.301013 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04e2ecb1-4815-4e78-a66a-197b52705a66" containerName="collect-profiles" Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.301023 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="04e2ecb1-4815-4e78-a66a-197b52705a66" containerName="collect-profiles" Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.301472 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="04e2ecb1-4815-4e78-a66a-197b52705a66" containerName="collect-profiles" Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.301516 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="81cbc9d4-3e2e-4160-a984-78cf968adf3e" containerName="container-00" Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.302742 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4svfd/crc-debug-vhfft" Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.422997 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrbpv\" (UniqueName: \"kubernetes.io/projected/68a0b38d-95da-4ea8-b7ea-d95404f8d855-kube-api-access-hrbpv\") pod \"crc-debug-vhfft\" (UID: \"68a0b38d-95da-4ea8-b7ea-d95404f8d855\") " pod="openshift-must-gather-4svfd/crc-debug-vhfft" Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.423112 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68a0b38d-95da-4ea8-b7ea-d95404f8d855-host\") pod \"crc-debug-vhfft\" (UID: \"68a0b38d-95da-4ea8-b7ea-d95404f8d855\") " pod="openshift-must-gather-4svfd/crc-debug-vhfft" Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.448520 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81cbc9d4-3e2e-4160-a984-78cf968adf3e" path="/var/lib/kubelet/pods/81cbc9d4-3e2e-4160-a984-78cf968adf3e/volumes" Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.464689 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj"] Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.477355 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29400390-wmlbj"] Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.525210 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrbpv\" (UniqueName: \"kubernetes.io/projected/68a0b38d-95da-4ea8-b7ea-d95404f8d855-kube-api-access-hrbpv\") pod \"crc-debug-vhfft\" (UID: \"68a0b38d-95da-4ea8-b7ea-d95404f8d855\") " pod="openshift-must-gather-4svfd/crc-debug-vhfft" Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.525390 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68a0b38d-95da-4ea8-b7ea-d95404f8d855-host\") pod \"crc-debug-vhfft\" (UID: \"68a0b38d-95da-4ea8-b7ea-d95404f8d855\") " pod="openshift-must-gather-4svfd/crc-debug-vhfft" Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.527110 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68a0b38d-95da-4ea8-b7ea-d95404f8d855-host\") pod \"crc-debug-vhfft\" (UID: \"68a0b38d-95da-4ea8-b7ea-d95404f8d855\") " pod="openshift-must-gather-4svfd/crc-debug-vhfft" Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.544912 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrbpv\" (UniqueName: \"kubernetes.io/projected/68a0b38d-95da-4ea8-b7ea-d95404f8d855-kube-api-access-hrbpv\") pod \"crc-debug-vhfft\" (UID: \"68a0b38d-95da-4ea8-b7ea-d95404f8d855\") " pod="openshift-must-gather-4svfd/crc-debug-vhfft" Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.623012 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4svfd/crc-debug-vhfft" Nov 24 23:15:04 crc kubenswrapper[4915]: W1124 23:15:04.661690 4915 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68a0b38d_95da_4ea8_b7ea_d95404f8d855.slice/crio-c0d229f244a5ba90cdab83241d2cb2ebbbb0a961e108f712e15d62d502979ad3 WatchSource:0}: Error finding container c0d229f244a5ba90cdab83241d2cb2ebbbb0a961e108f712e15d62d502979ad3: Status 404 returned error can't find the container with id c0d229f244a5ba90cdab83241d2cb2ebbbb0a961e108f712e15d62d502979ad3 Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.952280 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4svfd/crc-debug-vhfft" event={"ID":"68a0b38d-95da-4ea8-b7ea-d95404f8d855","Type":"ContainerStarted","Data":"4555110c47f8ad29a60081e65fa115ccbc4703e19c2024701f5e4979b968103d"} Nov 24 23:15:04 crc kubenswrapper[4915]: I1124 23:15:04.952593 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4svfd/crc-debug-vhfft" event={"ID":"68a0b38d-95da-4ea8-b7ea-d95404f8d855","Type":"ContainerStarted","Data":"c0d229f244a5ba90cdab83241d2cb2ebbbb0a961e108f712e15d62d502979ad3"} Nov 24 23:15:05 crc kubenswrapper[4915]: I1124 23:15:05.000537 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4svfd/crc-debug-vhfft"] Nov 24 23:15:05 crc kubenswrapper[4915]: I1124 23:15:05.013504 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4svfd/crc-debug-vhfft"] Nov 24 23:15:05 crc kubenswrapper[4915]: I1124 23:15:05.967215 4915 generic.go:334] "Generic (PLEG): container finished" podID="68a0b38d-95da-4ea8-b7ea-d95404f8d855" containerID="4555110c47f8ad29a60081e65fa115ccbc4703e19c2024701f5e4979b968103d" exitCode=0 Nov 24 23:15:06 crc kubenswrapper[4915]: I1124 23:15:06.124988 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4svfd/crc-debug-vhfft" Nov 24 23:15:06 crc kubenswrapper[4915]: I1124 23:15:06.267185 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68a0b38d-95da-4ea8-b7ea-d95404f8d855-host\") pod \"68a0b38d-95da-4ea8-b7ea-d95404f8d855\" (UID: \"68a0b38d-95da-4ea8-b7ea-d95404f8d855\") " Nov 24 23:15:06 crc kubenswrapper[4915]: I1124 23:15:06.267464 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrbpv\" (UniqueName: \"kubernetes.io/projected/68a0b38d-95da-4ea8-b7ea-d95404f8d855-kube-api-access-hrbpv\") pod \"68a0b38d-95da-4ea8-b7ea-d95404f8d855\" (UID: \"68a0b38d-95da-4ea8-b7ea-d95404f8d855\") " Nov 24 23:15:06 crc kubenswrapper[4915]: I1124 23:15:06.267565 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68a0b38d-95da-4ea8-b7ea-d95404f8d855-host" (OuterVolumeSpecName: "host") pod "68a0b38d-95da-4ea8-b7ea-d95404f8d855" (UID: "68a0b38d-95da-4ea8-b7ea-d95404f8d855"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 24 23:15:06 crc kubenswrapper[4915]: I1124 23:15:06.268202 4915 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/68a0b38d-95da-4ea8-b7ea-d95404f8d855-host\") on node \"crc\" DevicePath \"\"" Nov 24 23:15:06 crc kubenswrapper[4915]: I1124 23:15:06.272652 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68a0b38d-95da-4ea8-b7ea-d95404f8d855-kube-api-access-hrbpv" (OuterVolumeSpecName: "kube-api-access-hrbpv") pod "68a0b38d-95da-4ea8-b7ea-d95404f8d855" (UID: "68a0b38d-95da-4ea8-b7ea-d95404f8d855"). InnerVolumeSpecName "kube-api-access-hrbpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:15:06 crc kubenswrapper[4915]: I1124 23:15:06.370305 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrbpv\" (UniqueName: \"kubernetes.io/projected/68a0b38d-95da-4ea8-b7ea-d95404f8d855-kube-api-access-hrbpv\") on node \"crc\" DevicePath \"\"" Nov 24 23:15:06 crc kubenswrapper[4915]: I1124 23:15:06.440236 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68a0b38d-95da-4ea8-b7ea-d95404f8d855" path="/var/lib/kubelet/pods/68a0b38d-95da-4ea8-b7ea-d95404f8d855/volumes" Nov 24 23:15:06 crc kubenswrapper[4915]: I1124 23:15:06.442181 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95e5ac9f-3f14-4c8c-b8ac-94ef315888d1" path="/var/lib/kubelet/pods/95e5ac9f-3f14-4c8c-b8ac-94ef315888d1/volumes" Nov 24 23:15:06 crc kubenswrapper[4915]: I1124 23:15:06.979595 4915 scope.go:117] "RemoveContainer" containerID="4555110c47f8ad29a60081e65fa115ccbc4703e19c2024701f5e4979b968103d" Nov 24 23:15:06 crc kubenswrapper[4915]: I1124 23:15:06.979640 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4svfd/crc-debug-vhfft" Nov 24 23:15:24 crc kubenswrapper[4915]: I1124 23:15:24.327311 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 24 23:15:24 crc kubenswrapper[4915]: I1124 23:15:24.327769 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 24 23:15:24 crc kubenswrapper[4915]: I1124 23:15:24.327822 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" Nov 24 23:15:24 crc kubenswrapper[4915]: I1124 23:15:24.328650 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a89069e46a05c46d5332f059d72afceaca9bcadbe4fdf497e4e10e488952af5"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 24 23:15:24 crc kubenswrapper[4915]: I1124 23:15:24.328692 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://2a89069e46a05c46d5332f059d72afceaca9bcadbe4fdf497e4e10e488952af5" gracePeriod=600 Nov 24 23:15:25 crc kubenswrapper[4915]: I1124 23:15:25.219199 4915 generic.go:334] "Generic (PLEG): container finished" 
podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="2a89069e46a05c46d5332f059d72afceaca9bcadbe4fdf497e4e10e488952af5" exitCode=0 Nov 24 23:15:25 crc kubenswrapper[4915]: I1124 23:15:25.219286 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"2a89069e46a05c46d5332f059d72afceaca9bcadbe4fdf497e4e10e488952af5"} Nov 24 23:15:25 crc kubenswrapper[4915]: I1124 23:15:25.219726 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf"} Nov 24 23:15:25 crc kubenswrapper[4915]: I1124 23:15:25.219773 4915 scope.go:117] "RemoveContainer" containerID="9460b23ff5cbdbb06020658fde3647ecb1f224c454abb42963e2e7769eedaf47" Nov 24 23:15:26 crc kubenswrapper[4915]: I1124 23:15:26.279657 4915 scope.go:117] "RemoveContainer" containerID="da58134e8c6f449adadd70e6c894b8980fb478baf41a16ed3afa4bed41e61063" Nov 24 23:15:26 crc kubenswrapper[4915]: I1124 23:15:26.320948 4915 scope.go:117] "RemoveContainer" containerID="e0d86a934adfa74a1b21ef35768c0599ef302e1cdff426306304e5b4b6ae9c67" Nov 24 23:15:26 crc kubenswrapper[4915]: I1124 23:15:26.438570 4915 scope.go:117] "RemoveContainer" containerID="3735226948a77bf2a9b6295e8b1d7902a207546caf501a087586ff7a0df08d34" Nov 24 23:15:26 crc kubenswrapper[4915]: I1124 23:15:26.470403 4915 scope.go:117] "RemoveContainer" containerID="cf323dbee840edf8775d762fc96202e963b9cb54421dde99facbeac8e472248f" Nov 24 23:15:29 crc kubenswrapper[4915]: I1124 23:15:29.737167 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce/aodh-api/0.log" Nov 24 23:15:29 crc kubenswrapper[4915]: I1124 23:15:29.886065 4915 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_aodh-0_b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce/aodh-evaluator/0.log" Nov 24 23:15:29 crc kubenswrapper[4915]: I1124 23:15:29.893449 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce/aodh-listener/0.log" Nov 24 23:15:29 crc kubenswrapper[4915]: I1124 23:15:29.943325 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_b3bebfc5-51fd-4ba3-af76-4fbe63dd96ce/aodh-notifier/0.log" Nov 24 23:15:30 crc kubenswrapper[4915]: I1124 23:15:30.067871 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-db-sync-dfc4k_4b249c5c-fb49-4f90-9821-2c4c7b37d448/aodh-db-sync/0.log" Nov 24 23:15:30 crc kubenswrapper[4915]: I1124 23:15:30.186083 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7985c5bcf4-b8zrw_35feb1a4-a62a-425e-a77f-48e60720b620/barbican-api/0.log" Nov 24 23:15:30 crc kubenswrapper[4915]: I1124 23:15:30.272545 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7985c5bcf4-b8zrw_35feb1a4-a62a-425e-a77f-48e60720b620/barbican-api-log/0.log" Nov 24 23:15:30 crc kubenswrapper[4915]: I1124 23:15:30.369975 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7dd95d5f64-ht7d8_2bce36ae-dbff-4c97-9f8e-43edd44dfad0/barbican-keystone-listener/0.log" Nov 24 23:15:30 crc kubenswrapper[4915]: I1124 23:15:30.392590 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7dd95d5f64-ht7d8_2bce36ae-dbff-4c97-9f8e-43edd44dfad0/barbican-keystone-listener-log/0.log" Nov 24 23:15:30 crc kubenswrapper[4915]: I1124 23:15:30.530114 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-66f8975c4c-2r5c7_2b2a93f7-9082-4391-b366-64cd870dc30e/barbican-worker/0.log" Nov 24 23:15:30 crc kubenswrapper[4915]: I1124 23:15:30.587111 4915 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-66f8975c4c-2r5c7_2b2a93f7-9082-4391-b366-64cd870dc30e/barbican-worker-log/0.log" Nov 24 23:15:30 crc kubenswrapper[4915]: I1124 23:15:30.710725 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xn2p6_48dcd241-f575-4735-8ad2-0449ae02ddaf/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:30 crc kubenswrapper[4915]: I1124 23:15:30.822409 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e69116cf-d50e-44af-899f-2e11d16e45d1/ceilometer-central-agent/0.log" Nov 24 23:15:30 crc kubenswrapper[4915]: I1124 23:15:30.894383 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e69116cf-d50e-44af-899f-2e11d16e45d1/proxy-httpd/0.log" Nov 24 23:15:30 crc kubenswrapper[4915]: I1124 23:15:30.911369 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e69116cf-d50e-44af-899f-2e11d16e45d1/ceilometer-notification-agent/0.log" Nov 24 23:15:31 crc kubenswrapper[4915]: I1124 23:15:31.007793 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_e69116cf-d50e-44af-899f-2e11d16e45d1/sg-core/0.log" Nov 24 23:15:31 crc kubenswrapper[4915]: I1124 23:15:31.127627 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_848690e8-3b39-4e42-b420-ab4cc3d251be/cinder-api-log/0.log" Nov 24 23:15:31 crc kubenswrapper[4915]: I1124 23:15:31.164104 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_848690e8-3b39-4e42-b420-ab4cc3d251be/cinder-api/0.log" Nov 24 23:15:31 crc kubenswrapper[4915]: I1124 23:15:31.337336 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e833f179-abc7-49e0-8cba-b50d7378ce5b/cinder-scheduler/0.log" Nov 24 23:15:31 crc kubenswrapper[4915]: I1124 23:15:31.360008 4915 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_e833f179-abc7-49e0-8cba-b50d7378ce5b/probe/0.log" Nov 24 23:15:31 crc kubenswrapper[4915]: I1124 23:15:31.483611 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-97pjm_8927e2c7-ae6d-4664-a7b4-ea2fbe28fefe/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:31 crc kubenswrapper[4915]: I1124 23:15:31.580034 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-ct6bj_09f1f524-d962-4ff4-b8e4-d9e3ace2f492/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:31 crc kubenswrapper[4915]: I1124 23:15:31.742699 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-krgft_4adb52f8-9d39-407f-8e70-4dfe00552554/init/0.log" Nov 24 23:15:31 crc kubenswrapper[4915]: I1124 23:15:31.940371 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-krgft_4adb52f8-9d39-407f-8e70-4dfe00552554/init/0.log" Nov 24 23:15:31 crc kubenswrapper[4915]: I1124 23:15:31.961760 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-ht7z7_f71427ed-e785-491d-b980-63359324b3ac/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:32 crc kubenswrapper[4915]: I1124 23:15:32.023416 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-krgft_4adb52f8-9d39-407f-8e70-4dfe00552554/dnsmasq-dns/0.log" Nov 24 23:15:32 crc kubenswrapper[4915]: I1124 23:15:32.157785 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b0ccc65b-d989-428d-8ec6-ad88e8a03f42/glance-httpd/0.log" Nov 24 23:15:32 crc kubenswrapper[4915]: I1124 23:15:32.250468 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_b0ccc65b-d989-428d-8ec6-ad88e8a03f42/glance-log/0.log" Nov 24 23:15:32 crc kubenswrapper[4915]: I1124 23:15:32.398447 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5b5763a8-4eea-4c3b-bddc-a9d55da42631/glance-log/0.log" Nov 24 23:15:32 crc kubenswrapper[4915]: I1124 23:15:32.430142 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5b5763a8-4eea-4c3b-bddc-a9d55da42631/glance-httpd/0.log" Nov 24 23:15:32 crc kubenswrapper[4915]: I1124 23:15:32.519194 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-57b8f9ccb-vdpvp_2fde2e2c-d0d1-42a3-9436-28a1bb06112e/heat-api/0.log" Nov 24 23:15:32 crc kubenswrapper[4915]: I1124 23:15:32.641977 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-6f89f694b6-btvwg_08e2ac0b-bab8-40e2-9369-9ccb8f4e377b/heat-cfnapi/0.log" Nov 24 23:15:32 crc kubenswrapper[4915]: I1124 23:15:32.984663 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-db-sync-bnv6m_3f948bc6-46ff-4a75-98c5-beafdbd54bcc/heat-db-sync/0.log" Nov 24 23:15:33 crc kubenswrapper[4915]: I1124 23:15:33.038978 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-766d8d5b4c-d9mzt_3dd2ba05-2c59-4467-9374-0777760cffd3/heat-engine/0.log" Nov 24 23:15:33 crc kubenswrapper[4915]: I1124 23:15:33.173241 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-nfdss_96495372-42d5-4a61-99d1-be56b844f795/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:33 crc kubenswrapper[4915]: I1124 23:15:33.331293 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-s9hmp_389bed1d-1ac1-470b-af86-8adb74146ef0/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:33 crc kubenswrapper[4915]: I1124 23:15:33.545598 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29400361-bksjl_8fb97ba2-7366-43cc-92db-3e5c5e61a99a/keystone-cron/0.log" Nov 24 23:15:33 crc kubenswrapper[4915]: I1124 23:15:33.650346 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29400421-8wvcv_621c0c9b-73f8-49ca-90a1-a7f3dc80382b/keystone-cron/0.log" Nov 24 23:15:33 crc kubenswrapper[4915]: I1124 23:15:33.690888 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6b68f549f-hk4pm_9535890f-6177-4219-9584-7e0200661d82/keystone-api/0.log" Nov 24 23:15:33 crc kubenswrapper[4915]: I1124 23:15:33.719811 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_daa86f21-5aec-48b3-833d-8d2c99e96028/kube-state-metrics/0.log" Nov 24 23:15:33 crc kubenswrapper[4915]: I1124 23:15:33.921831 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-jzmrf_ab48a05a-11c1-4c6b-b53c-2bf99f7b13b6/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:33 crc kubenswrapper[4915]: I1124 23:15:33.926537 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-fv4nr_93a85159-d030-4598-8bad-305ba5b7a459/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:34 crc kubenswrapper[4915]: I1124 23:15:34.110879 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_d3319417-d53f-48fa-bffa-fa20dfabccd4/mysqld-exporter/0.log" Nov 24 23:15:34 crc kubenswrapper[4915]: I1124 23:15:34.376155 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-855b85d565-grqmb_e93587e9-d7d1-4cbb-894c-ad138ffa8fdd/neutron-api/0.log" Nov 24 23:15:34 crc kubenswrapper[4915]: I1124 23:15:34.458303 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-855b85d565-grqmb_e93587e9-d7d1-4cbb-894c-ad138ffa8fdd/neutron-httpd/0.log" Nov 24 23:15:34 crc kubenswrapper[4915]: I1124 23:15:34.514111 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-j2xl5_62407755-3fa0-4c4d-90ff-cf42f25bdbc6/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:35 crc kubenswrapper[4915]: I1124 23:15:35.006427 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_23643233-f874-4589-9494-dcdada59274e/nova-cell0-conductor-conductor/0.log" Nov 24 23:15:35 crc kubenswrapper[4915]: I1124 23:15:35.028132 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb/nova-api-log/0.log" Nov 24 23:15:35 crc kubenswrapper[4915]: I1124 23:15:35.370178 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_7a63b596-ad59-47a7-bc24-7efbc0a4001b/nova-cell1-conductor-conductor/0.log" Nov 24 23:15:35 crc kubenswrapper[4915]: I1124 23:15:35.392303 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b2b977b2-72cc-4b96-9a12-85155332319b/nova-cell1-novncproxy-novncproxy/0.log" Nov 24 23:15:35 crc kubenswrapper[4915]: I1124 23:15:35.529016 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b2a17d8e-f2cd-4761-ae93-b8d7b817b6cb/nova-api-api/0.log" Nov 24 23:15:35 crc kubenswrapper[4915]: I1124 23:15:35.757734 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-dmpwz_96ebeccc-beda-4622-9d03-6aabb443b1fe/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:35 crc kubenswrapper[4915]: I1124 23:15:35.922921 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_59818a4e-515f-477f-96ad-78e6b8310657/memcached/0.log" Nov 24 23:15:36 crc kubenswrapper[4915]: I1124 23:15:36.007369 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b63d6368-a68c-4520-835c-2799c2d64673/nova-metadata-log/0.log" Nov 24 23:15:36 crc kubenswrapper[4915]: I1124 23:15:36.188328 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_2eb4f336-48eb-4d67-b147-cb401de61753/nova-scheduler-scheduler/0.log" Nov 24 23:15:36 crc kubenswrapper[4915]: I1124 23:15:36.211966 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d00638de-cc40-405d-b271-b681f199a172/mysql-bootstrap/0.log" Nov 24 23:15:36 crc kubenswrapper[4915]: I1124 23:15:36.453293 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d00638de-cc40-405d-b271-b681f199a172/galera/0.log" Nov 24 23:15:36 crc kubenswrapper[4915]: I1124 23:15:36.515033 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d00638de-cc40-405d-b271-b681f199a172/mysql-bootstrap/0.log" Nov 24 23:15:36 crc kubenswrapper[4915]: I1124 23:15:36.584263 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b0914514-21db-4664-9fdf-935c0f671637/mysql-bootstrap/0.log" Nov 24 23:15:36 crc kubenswrapper[4915]: I1124 23:15:36.673899 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_b0914514-21db-4664-9fdf-935c0f671637/mysql-bootstrap/0.log" Nov 24 23:15:36 crc kubenswrapper[4915]: I1124 23:15:36.762769 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_b0914514-21db-4664-9fdf-935c0f671637/galera/0.log" Nov 24 23:15:36 crc kubenswrapper[4915]: I1124 23:15:36.797667 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8274e484-f997-4150-a496-f8f1a68ccfae/openstackclient/0.log" Nov 24 23:15:37 crc kubenswrapper[4915]: I1124 23:15:37.035061 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-wv8qp_f8d6bb82-7d54-47c9-bc17-a572fe39d0df/openstack-network-exporter/0.log" Nov 24 23:15:37 crc kubenswrapper[4915]: I1124 23:15:37.141391 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mszzs_e5c7ffec-8976-487c-9bff-d3697cef3724/ovsdb-server-init/0.log" Nov 24 23:15:37 crc kubenswrapper[4915]: I1124 23:15:37.304482 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mszzs_e5c7ffec-8976-487c-9bff-d3697cef3724/ovsdb-server-init/0.log" Nov 24 23:15:37 crc kubenswrapper[4915]: I1124 23:15:37.367382 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mszzs_e5c7ffec-8976-487c-9bff-d3697cef3724/ovsdb-server/0.log" Nov 24 23:15:37 crc kubenswrapper[4915]: I1124 23:15:37.438249 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-mszzs_e5c7ffec-8976-487c-9bff-d3697cef3724/ovs-vswitchd/0.log" Nov 24 23:15:37 crc kubenswrapper[4915]: I1124 23:15:37.552937 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-wj47k_67dd4a19-a0b7-4c7b-8289-40b7fc2476dc/ovn-controller/0.log" Nov 24 23:15:37 crc kubenswrapper[4915]: I1124 23:15:37.661460 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b63d6368-a68c-4520-835c-2799c2d64673/nova-metadata-metadata/0.log" Nov 24 23:15:37 crc kubenswrapper[4915]: I1124 23:15:37.676681 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-wbgtx_48d4639e-cc06-4db8-b81d-336f8ef4bda5/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:37 crc kubenswrapper[4915]: I1124 23:15:37.798946 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_13f6ca90-aa94-45a7-803d-22cee7fd27aa/openstack-network-exporter/0.log" Nov 24 23:15:37 crc kubenswrapper[4915]: I1124 23:15:37.833410 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_13f6ca90-aa94-45a7-803d-22cee7fd27aa/ovn-northd/0.log" Nov 24 23:15:37 crc kubenswrapper[4915]: I1124 23:15:37.862546 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_30bb22de-3ca5-4580-92cf-09e8653a98ab/openstack-network-exporter/0.log" Nov 24 23:15:37 crc kubenswrapper[4915]: I1124 23:15:37.869601 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_30bb22de-3ca5-4580-92cf-09e8653a98ab/ovsdbserver-nb/0.log" Nov 24 23:15:38 crc kubenswrapper[4915]: I1124 23:15:38.031093 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_df07ff21-06da-4be5-85f8-cb5821d002bb/openstack-network-exporter/0.log" Nov 24 23:15:38 crc kubenswrapper[4915]: I1124 23:15:38.051109 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_df07ff21-06da-4be5-85f8-cb5821d002bb/ovsdbserver-sb/0.log" Nov 24 23:15:38 crc kubenswrapper[4915]: I1124 23:15:38.169142 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6db74dbdbb-6ntbq_ff00b457-c47c-46b0-aa05-a0fe3d54ffa0/placement-api/0.log" Nov 24 23:15:38 crc kubenswrapper[4915]: I1124 23:15:38.277199 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cbc69ba4-d747-467c-98ab-d22491a8203c/init-config-reloader/0.log" Nov 24 23:15:38 crc kubenswrapper[4915]: I1124 23:15:38.279418 4915 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_placement-6db74dbdbb-6ntbq_ff00b457-c47c-46b0-aa05-a0fe3d54ffa0/placement-log/0.log" Nov 24 23:15:38 crc kubenswrapper[4915]: I1124 23:15:38.444359 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cbc69ba4-d747-467c-98ab-d22491a8203c/config-reloader/0.log" Nov 24 23:15:38 crc kubenswrapper[4915]: I1124 23:15:38.469906 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cbc69ba4-d747-467c-98ab-d22491a8203c/init-config-reloader/0.log" Nov 24 23:15:38 crc kubenswrapper[4915]: I1124 23:15:38.498126 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cbc69ba4-d747-467c-98ab-d22491a8203c/prometheus/0.log" Nov 24 23:15:38 crc kubenswrapper[4915]: I1124 23:15:38.523257 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_cbc69ba4-d747-467c-98ab-d22491a8203c/thanos-sidecar/0.log" Nov 24 23:15:38 crc kubenswrapper[4915]: I1124 23:15:38.632015 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6c951dd6-e7fe-411c-8156-92784c966328/setup-container/0.log" Nov 24 23:15:38 crc kubenswrapper[4915]: I1124 23:15:38.828955 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6c951dd6-e7fe-411c-8156-92784c966328/rabbitmq/0.log" Nov 24 23:15:38 crc kubenswrapper[4915]: I1124 23:15:38.852846 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_28a8634d-9ce0-460f-a8a9-e5cb05fc63cc/setup-container/0.log" Nov 24 23:15:38 crc kubenswrapper[4915]: I1124 23:15:38.855468 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_6c951dd6-e7fe-411c-8156-92784c966328/setup-container/0.log" Nov 24 23:15:39 crc kubenswrapper[4915]: I1124 23:15:39.067508 4915 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_28a8634d-9ce0-460f-a8a9-e5cb05fc63cc/setup-container/0.log" Nov 24 23:15:39 crc kubenswrapper[4915]: I1124 23:15:39.087130 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_28a8634d-9ce0-460f-a8a9-e5cb05fc63cc/rabbitmq/0.log" Nov 24 23:15:39 crc kubenswrapper[4915]: I1124 23:15:39.116863 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bl2vm_0a81a419-2ab6-4e5b-9f8f-91dee9db382c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:39 crc kubenswrapper[4915]: I1124 23:15:39.245803 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-2ql5d_2bcdaa71-e70f-4d08-969f-bf0f3aa88db8/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:39 crc kubenswrapper[4915]: I1124 23:15:39.308088 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-2jtv2_17878839-cfa7-48e0-a162-9a11347e9424/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:39 crc kubenswrapper[4915]: I1124 23:15:39.361137 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4nrw5_16282458-e621-4ced-9063-b7106c2fbd91/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:39 crc kubenswrapper[4915]: I1124 23:15:39.633098 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-ggpl4_c2d7b8c5-027d-4ea1-8eed-33866c899a66/ssh-known-hosts-edpm-deployment/0.log" Nov 24 23:15:39 crc kubenswrapper[4915]: I1124 23:15:39.849923 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-968b48459-dfqg8_cdff3fcc-adf5-4e04-9cea-b12c43b4f025/proxy-server/0.log" Nov 24 23:15:39 crc kubenswrapper[4915]: I1124 23:15:39.876843 4915 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-968b48459-dfqg8_cdff3fcc-adf5-4e04-9cea-b12c43b4f025/proxy-httpd/0.log" Nov 24 23:15:39 crc kubenswrapper[4915]: I1124 23:15:39.950318 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-lfsfp_d354b8f9-3820-4897-8a8a-021d4a98668a/swift-ring-rebalance/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.082176 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/account-auditor/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.116130 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/account-reaper/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.184655 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/account-server/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.216045 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/account-replicator/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.246861 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/container-auditor/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.335148 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/container-replicator/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.366636 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/container-server/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.385296 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/container-updater/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.434115 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/object-auditor/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.467099 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/object-expirer/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.552030 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/object-replicator/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.595110 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/object-updater/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.603920 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/object-server/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.646067 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/rsync/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.702328 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ea1dafcf-631a-4ae6-8aad-d716b977402d/swift-recon-cron/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.813614 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-9lpzk_13961a83-9c81-49e1-b894-3651f2f91eb3/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:40 crc kubenswrapper[4915]: I1124 23:15:40.859012 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-cgj9x_7e90f618-8014-4582-a184-e00647111efc/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:15:41 crc kubenswrapper[4915]: I1124 23:15:41.031675 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_5cde4f81-df73-4990-885c-690d843e90bb/tempest-tests-tempest-tests-runner/0.log" Nov 24 23:15:41 crc kubenswrapper[4915]: I1124 23:15:41.064421 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_73d33248-5d5a-4ff9-bea5-c9d45fd8b48c/test-operator-logs-container/0.log" Nov 24 23:15:41 crc kubenswrapper[4915]: I1124 23:15:41.167681 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-8pf49_12c05179-3387-4aad-aa3a-f23931dd6360/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 24 23:16:06 crc kubenswrapper[4915]: I1124 23:16:06.767390 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-rml9z_5c67ba4c-d4f5-496d-bf28-6d681f42c840/kube-rbac-proxy/0.log" Nov 24 23:16:06 crc kubenswrapper[4915]: I1124 23:16:06.829960 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-rml9z_5c67ba4c-d4f5-496d-bf28-6d681f42c840/manager/0.log" Nov 24 23:16:06 crc kubenswrapper[4915]: I1124 23:16:06.953091 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-8hkmn_87d99431-c528-49c5-b28b-e32a6f46baaf/kube-rbac-proxy/0.log" Nov 24 23:16:07 crc kubenswrapper[4915]: I1124 23:16:07.014487 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-8hkmn_87d99431-c528-49c5-b28b-e32a6f46baaf/manager/0.log" Nov 24 23:16:07 crc kubenswrapper[4915]: I1124 23:16:07.146459 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f_b2a9acff-41f6-47c0-9d84-8cb23ea017df/util/0.log" Nov 24 23:16:07 crc kubenswrapper[4915]: I1124 23:16:07.320800 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f_b2a9acff-41f6-47c0-9d84-8cb23ea017df/pull/0.log" Nov 24 23:16:07 crc kubenswrapper[4915]: I1124 23:16:07.351625 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f_b2a9acff-41f6-47c0-9d84-8cb23ea017df/pull/0.log" Nov 24 23:16:07 crc kubenswrapper[4915]: I1124 23:16:07.353086 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f_b2a9acff-41f6-47c0-9d84-8cb23ea017df/util/0.log" Nov 24 23:16:07 crc kubenswrapper[4915]: I1124 23:16:07.506223 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f_b2a9acff-41f6-47c0-9d84-8cb23ea017df/util/0.log" Nov 24 23:16:07 crc kubenswrapper[4915]: I1124 23:16:07.561490 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f_b2a9acff-41f6-47c0-9d84-8cb23ea017df/extract/0.log" Nov 24 23:16:07 crc kubenswrapper[4915]: I1124 23:16:07.573399 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d8f30c8faa25ec6b7c340e3f93121127e93892cb4101c805e95556ac97zjp9f_b2a9acff-41f6-47c0-9d84-8cb23ea017df/pull/0.log" Nov 24 23:16:07 crc 
kubenswrapper[4915]: I1124 23:16:07.696126 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-wp6b9_6638fde3-c855-47c4-a339-a4b64a3b83ad/kube-rbac-proxy/0.log" Nov 24 23:16:07 crc kubenswrapper[4915]: I1124 23:16:07.768627 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-wp6b9_6638fde3-c855-47c4-a339-a4b64a3b83ad/manager/0.log" Nov 24 23:16:07 crc kubenswrapper[4915]: I1124 23:16:07.789963 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-rpbgd_04e18c27-a9e4-439e-994b-2c38eb126153/kube-rbac-proxy/0.log" Nov 24 23:16:07 crc kubenswrapper[4915]: I1124 23:16:07.985875 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-qdzj4_c8b0feeb-28f3-41a6-8ccc-a9eb042c4416/kube-rbac-proxy/0.log" Nov 24 23:16:07 crc kubenswrapper[4915]: I1124 23:16:07.986221 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-rpbgd_04e18c27-a9e4-439e-994b-2c38eb126153/manager/0.log" Nov 24 23:16:08 crc kubenswrapper[4915]: I1124 23:16:08.103511 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-qdzj4_c8b0feeb-28f3-41a6-8ccc-a9eb042c4416/manager/0.log" Nov 24 23:16:08 crc kubenswrapper[4915]: I1124 23:16:08.152702 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-vft2k_32115f30-05b5-4828-ba4b-155b238026a1/kube-rbac-proxy/0.log" Nov 24 23:16:08 crc kubenswrapper[4915]: I1124 23:16:08.218784 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-vft2k_32115f30-05b5-4828-ba4b-155b238026a1/manager/0.log" Nov 24 23:16:08 crc kubenswrapper[4915]: I1124 23:16:08.341671 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-kcs7l_b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b/kube-rbac-proxy/0.log" Nov 24 23:16:08 crc kubenswrapper[4915]: I1124 23:16:08.482347 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-m4z8z_6c2ce74c-40c4-4b98-ac53-1ce4869dfbe1/kube-rbac-proxy/0.log" Nov 24 23:16:08 crc kubenswrapper[4915]: I1124 23:16:08.574461 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-kcs7l_b37bc9b3-fd0d-43f8-b30c-b5dda3f4cb8b/manager/0.log" Nov 24 23:16:08 crc kubenswrapper[4915]: I1124 23:16:08.596031 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-m4z8z_6c2ce74c-40c4-4b98-ac53-1ce4869dfbe1/manager/0.log" Nov 24 23:16:08 crc kubenswrapper[4915]: I1124 23:16:08.685487 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-86mf2_2ce17a51-ca7e-4692-946d-1a09c9a865c5/kube-rbac-proxy/0.log" Nov 24 23:16:08 crc kubenswrapper[4915]: I1124 23:16:08.803656 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-86mf2_2ce17a51-ca7e-4692-946d-1a09c9a865c5/manager/0.log" Nov 24 23:16:08 crc kubenswrapper[4915]: I1124 23:16:08.898394 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-8bzrw_d090cd1f-8af4-468c-881d-d04cf192b0c4/kube-rbac-proxy/0.log" Nov 24 23:16:08 crc kubenswrapper[4915]: I1124 23:16:08.921044 
4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-8bzrw_d090cd1f-8af4-468c-881d-d04cf192b0c4/manager/0.log" Nov 24 23:16:09 crc kubenswrapper[4915]: I1124 23:16:09.023593 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-xz4zz_567bba9d-1881-4a67-b6bb-678650252bcc/kube-rbac-proxy/0.log" Nov 24 23:16:09 crc kubenswrapper[4915]: I1124 23:16:09.126646 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-xz4zz_567bba9d-1881-4a67-b6bb-678650252bcc/manager/0.log" Nov 24 23:16:09 crc kubenswrapper[4915]: I1124 23:16:09.188995 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-4mrzh_3f175a4a-2119-4b83-84f6-d067eb8be406/kube-rbac-proxy/0.log" Nov 24 23:16:09 crc kubenswrapper[4915]: I1124 23:16:09.279101 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-4mrzh_3f175a4a-2119-4b83-84f6-d067eb8be406/manager/0.log" Nov 24 23:16:09 crc kubenswrapper[4915]: I1124 23:16:09.367730 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-zk5k2_3bbdd81f-f69b-4e09-b1b2-374723b591ab/kube-rbac-proxy/0.log" Nov 24 23:16:09 crc kubenswrapper[4915]: I1124 23:16:09.442820 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-zk5k2_3bbdd81f-f69b-4e09-b1b2-374723b591ab/manager/0.log" Nov 24 23:16:09 crc kubenswrapper[4915]: I1124 23:16:09.558608 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-whss9_cdd28f21-72b9-4818-88fd-68e6a8dbc508/kube-rbac-proxy/0.log" Nov 24 23:16:09 crc 
kubenswrapper[4915]: I1124 23:16:09.613402 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-whss9_cdd28f21-72b9-4818-88fd-68e6a8dbc508/manager/0.log" Nov 24 23:16:09 crc kubenswrapper[4915]: I1124 23:16:09.735014 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7_0a3128a6-4ca7-4cd5-800f-20860a97aed5/kube-rbac-proxy/0.log" Nov 24 23:16:09 crc kubenswrapper[4915]: I1124 23:16:09.777978 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-wxhz7_0a3128a6-4ca7-4cd5-800f-20860a97aed5/manager/0.log" Nov 24 23:16:10 crc kubenswrapper[4915]: I1124 23:16:10.177950 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-x5l2g_b37899b6-9f24-4165-b16c-ee2984d44800/registry-server/0.log" Nov 24 23:16:10 crc kubenswrapper[4915]: I1124 23:16:10.213993 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-78bdd85758-sqsbl_920274d2-7453-4b35-ac67-ada3c857cd58/operator/0.log" Nov 24 23:16:10 crc kubenswrapper[4915]: I1124 23:16:10.430522 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-zr674_03b268f6-b5db-44d1-8fc9-6d8cedb8c9b6/kube-rbac-proxy/0.log" Nov 24 23:16:10 crc kubenswrapper[4915]: I1124 23:16:10.527289 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-zr674_03b268f6-b5db-44d1-8fc9-6d8cedb8c9b6/manager/0.log" Nov 24 23:16:10 crc kubenswrapper[4915]: I1124 23:16:10.598343 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-cc9cv_272c7c4e-9ecc-42cf-8b44-0a61f7823578/kube-rbac-proxy/0.log" Nov 24 23:16:10 crc kubenswrapper[4915]: I1124 23:16:10.670084 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-cc9cv_272c7c4e-9ecc-42cf-8b44-0a61f7823578/manager/0.log" Nov 24 23:16:10 crc kubenswrapper[4915]: I1124 23:16:10.770635 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zxrdr_49fac201-00d2-42d3-9e1f-ac2fde219037/operator/0.log" Nov 24 23:16:10 crc kubenswrapper[4915]: I1124 23:16:10.907402 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-6q9th_caacbc5e-655b-4876-a8ec-94fc83478510/kube-rbac-proxy/0.log" Nov 24 23:16:11 crc kubenswrapper[4915]: I1124 23:16:11.015189 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-6q9th_caacbc5e-655b-4876-a8ec-94fc83478510/manager/0.log" Nov 24 23:16:11 crc kubenswrapper[4915]: I1124 23:16:11.114354 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-f55c5bd94-dck7p_3f8c6903-143e-450e-8ea1-92d6ac474b48/kube-rbac-proxy/0.log" Nov 24 23:16:11 crc kubenswrapper[4915]: I1124 23:16:11.204269 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-b6b55f9c-sl7sk_87db2cf4-862d-4c0e-9cc9-72548e8bb63b/manager/0.log" Nov 24 23:16:11 crc kubenswrapper[4915]: I1124 23:16:11.253664 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-g9rrr_d23a4b31-9618-4e98-82a6-8e32881bec59/kube-rbac-proxy/0.log" Nov 24 23:16:11 crc kubenswrapper[4915]: I1124 23:16:11.351944 
4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-f55c5bd94-dck7p_3f8c6903-143e-450e-8ea1-92d6ac474b48/manager/0.log"
Nov 24 23:16:11 crc kubenswrapper[4915]: I1124 23:16:11.389478 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-g9rrr_d23a4b31-9618-4e98-82a6-8e32881bec59/manager/0.log"
Nov 24 23:16:11 crc kubenswrapper[4915]: I1124 23:16:11.458980 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-ngwcx_de2fcac4-4e4f-404d-a8e9-a4774d3f3936/manager/0.log"
Nov 24 23:16:11 crc kubenswrapper[4915]: I1124 23:16:11.474515 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-ngwcx_de2fcac4-4e4f-404d-a8e9-a4774d3f3936/kube-rbac-proxy/0.log"
Nov 24 23:16:30 crc kubenswrapper[4915]: I1124 23:16:30.632806 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rpff6_fe4383de-d0d0-423d-aca2-f3dc1da5acba/control-plane-machine-set-operator/0.log"
Nov 24 23:16:30 crc kubenswrapper[4915]: I1124 23:16:30.777959 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bp8jj_9195fe9f-3872-4a32-bb62-a13dfea6c331/machine-api-operator/0.log"
Nov 24 23:16:30 crc kubenswrapper[4915]: I1124 23:16:30.794325 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bp8jj_9195fe9f-3872-4a32-bb62-a13dfea6c331/kube-rbac-proxy/0.log"
Nov 24 23:16:44 crc kubenswrapper[4915]: I1124 23:16:44.825903 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-zp2vk_18dddeaf-8d70-474f-8a26-d39556870aa5/cert-manager-controller/0.log"
Nov 24 23:16:44 crc kubenswrapper[4915]: I1124 23:16:44.831771 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-7k4q2_dde8b00e-87d8-4fe6-8a29-82774ca1e721/cert-manager-cainjector/0.log"
Nov 24 23:16:44 crc kubenswrapper[4915]: I1124 23:16:44.972855 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-wstf7_357bd908-9457-4270-9b3b-a6b6ad47016f/cert-manager-webhook/0.log"
Nov 24 23:16:58 crc kubenswrapper[4915]: I1124 23:16:58.822417 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-kvpdj_a6569ffe-880f-4e4e-8603-391a495fbb50/nmstate-console-plugin/0.log"
Nov 24 23:16:58 crc kubenswrapper[4915]: I1124 23:16:58.991224 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-srvwk_b329347f-4c7a-4973-b432-fb033406721f/kube-rbac-proxy/0.log"
Nov 24 23:16:58 crc kubenswrapper[4915]: I1124 23:16:58.993708 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-brgx6_deb64d76-5b5e-481c-8486-c387b0579aa5/nmstate-handler/0.log"
Nov 24 23:16:59 crc kubenswrapper[4915]: I1124 23:16:59.042235 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-srvwk_b329347f-4c7a-4973-b432-fb033406721f/nmstate-metrics/0.log"
Nov 24 23:16:59 crc kubenswrapper[4915]: I1124 23:16:59.207072 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-7jsng_02d4045f-82d7-42b3-88eb-a4011970e80f/nmstate-operator/0.log"
Nov 24 23:16:59 crc kubenswrapper[4915]: I1124 23:16:59.246533 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-tgnfp_f28e1e42-b484-49c7-877f-43fd48691994/nmstate-webhook/0.log"
Nov 24 23:17:12 crc kubenswrapper[4915]: I1124 23:17:12.888315 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-65b8d94b4b-6kr4m_cf2549dc-2e6e-464a-9d5b-4631dcfe9e74/manager/0.log"
Nov 24 23:17:12 crc kubenswrapper[4915]: I1124 23:17:12.921577 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-65b8d94b4b-6kr4m_cf2549dc-2e6e-464a-9d5b-4631dcfe9e74/kube-rbac-proxy/0.log"
Nov 24 23:17:24 crc kubenswrapper[4915]: I1124 23:17:24.328060 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 23:17:24 crc kubenswrapper[4915]: I1124 23:17:24.328810 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 23:17:28 crc kubenswrapper[4915]: I1124 23:17:28.144531 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-9pcwh_e75a5a1e-8a27-41b2-8b34-86e0981dae10/cluster-logging-operator/0.log"
Nov 24 23:17:28 crc kubenswrapper[4915]: I1124 23:17:28.348528 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-ftlx8_be6316b9-4bd5-4762-93bb-245771235e4d/collector/0.log"
Nov 24 23:17:28 crc kubenswrapper[4915]: I1124 23:17:28.431979 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_8e687b84-4a5d-4a03-b384-35db23ba77cb/loki-compactor/0.log"
Nov 24 23:17:28 crc kubenswrapper[4915]: I1124 23:17:28.558017 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-5qxx5_d8add4aa-1636-46a6-9bb9-050f2c4a456f/loki-distributor/0.log"
Nov 24 23:17:28 crc kubenswrapper[4915]: I1124 23:17:28.690120 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-85f6fc88b5-2ltkt_cd4feec8-19ce-4e0a-8b72-62a5630a13cd/gateway/0.log"
Nov 24 23:17:28 crc kubenswrapper[4915]: I1124 23:17:28.746848 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-85f6fc88b5-2ltkt_cd4feec8-19ce-4e0a-8b72-62a5630a13cd/opa/0.log"
Nov 24 23:17:28 crc kubenswrapper[4915]: I1124 23:17:28.864849 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-85f6fc88b5-bnc8w_12be2daf-d31e-4bbb-921f-15a90d8db057/gateway/0.log"
Nov 24 23:17:28 crc kubenswrapper[4915]: I1124 23:17:28.897452 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-85f6fc88b5-bnc8w_12be2daf-d31e-4bbb-921f-15a90d8db057/opa/0.log"
Nov 24 23:17:28 crc kubenswrapper[4915]: I1124 23:17:28.989062 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_a4ba5fbb-400a-47eb-9bad-fee76800d021/loki-index-gateway/0.log"
Nov 24 23:17:29 crc kubenswrapper[4915]: I1124 23:17:29.344836 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-r4hdw_1e7f0b89-7e9d-4579-99ea-ba9d11af5e0f/loki-querier/0.log"
Nov 24 23:17:29 crc kubenswrapper[4915]: I1124 23:17:29.374188 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_3159fb3d-ea27-4141-97d1-0924f1854801/loki-ingester/0.log"
Nov 24 23:17:29 crc kubenswrapper[4915]: I1124 23:17:29.492565 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-4knt6_7cd510cd-fbfb-4351-88fa-149952989968/loki-query-frontend/0.log"
Nov 24 23:17:43 crc kubenswrapper[4915]: I1124 23:17:43.927620 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-fg52n_f1123b31-719b-4a3e-a6a9-31ea827aa3eb/kube-rbac-proxy/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.047531 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-fg52n_f1123b31-719b-4a3e-a6a9-31ea827aa3eb/controller/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.096507 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-97pzg_aca8edea-7127-45be-be5f-1a7385ce37bf/frr-k8s-webhook-server/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.243913 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/cp-frr-files/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.364976 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/cp-metrics/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.381248 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/cp-frr-files/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.421244 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/cp-reloader/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.446361 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/cp-reloader/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.604675 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/cp-metrics/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.606034 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/cp-reloader/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.618572 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/cp-frr-files/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.690287 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/cp-metrics/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.815158 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/cp-reloader/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.821989 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/cp-frr-files/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.836830 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/cp-metrics/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.896392 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/controller/0.log"
Nov 24 23:17:44 crc kubenswrapper[4915]: I1124 23:17:44.996103 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/frr-metrics/0.log"
Nov 24 23:17:45 crc kubenswrapper[4915]: I1124 23:17:45.038477 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/kube-rbac-proxy/0.log"
Nov 24 23:17:45 crc kubenswrapper[4915]: I1124 23:17:45.143710 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/kube-rbac-proxy-frr/0.log"
Nov 24 23:17:45 crc kubenswrapper[4915]: I1124 23:17:45.212315 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/reloader/0.log"
Nov 24 23:17:45 crc kubenswrapper[4915]: I1124 23:17:45.403147 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-b6bfbb889-hdsxt_c96e3c48-3b17-4e76-ada9-4ab0d0890974/manager/0.log"
Nov 24 23:17:45 crc kubenswrapper[4915]: I1124 23:17:45.420420 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7796db4489-smctd_64d59574-0f7e-4d28-818c-0fac0b2603bd/webhook-server/0.log"
Nov 24 23:17:45 crc kubenswrapper[4915]: I1124 23:17:45.660525 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pw7wv_e20a9fa7-0d01-46c0-9475-0895c44836f2/kube-rbac-proxy/0.log"
Nov 24 23:17:46 crc kubenswrapper[4915]: I1124 23:17:46.240914 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-pw7wv_e20a9fa7-0d01-46c0-9475-0895c44836f2/speaker/0.log"
Nov 24 23:17:47 crc kubenswrapper[4915]: I1124 23:17:47.022985 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-wfnx5_1f094d7a-47c6-4b6e-b33e-5cfc7ad5e679/frr/0.log"
Nov 24 23:17:54 crc kubenswrapper[4915]: I1124 23:17:54.328392 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 23:17:54 crc kubenswrapper[4915]: I1124 23:17:54.329096 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 23:18:00 crc kubenswrapper[4915]: I1124 23:18:00.298592 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq_65a8e33a-e2b6-45a0-a989-ea41d3223442/util/0.log"
Nov 24 23:18:00 crc kubenswrapper[4915]: I1124 23:18:00.509898 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq_65a8e33a-e2b6-45a0-a989-ea41d3223442/util/0.log"
Nov 24 23:18:00 crc kubenswrapper[4915]: I1124 23:18:00.517851 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq_65a8e33a-e2b6-45a0-a989-ea41d3223442/pull/0.log"
Nov 24 23:18:00 crc kubenswrapper[4915]: I1124 23:18:00.529334 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq_65a8e33a-e2b6-45a0-a989-ea41d3223442/pull/0.log"
Nov 24 23:18:00 crc kubenswrapper[4915]: I1124 23:18:00.694453 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq_65a8e33a-e2b6-45a0-a989-ea41d3223442/util/0.log"
Nov 24 23:18:00 crc kubenswrapper[4915]: I1124 23:18:00.709734 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq_65a8e33a-e2b6-45a0-a989-ea41d3223442/extract/0.log"
Nov 24 23:18:00 crc kubenswrapper[4915]: I1124 23:18:00.738735 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8t86nq_65a8e33a-e2b6-45a0-a989-ea41d3223442/pull/0.log"
Nov 24 23:18:00 crc kubenswrapper[4915]: I1124 23:18:00.900660 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj_fbb9899f-70f3-48f3-9bcf-d9d591ba184d/util/0.log"
Nov 24 23:18:01 crc kubenswrapper[4915]: I1124 23:18:01.077982 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj_fbb9899f-70f3-48f3-9bcf-d9d591ba184d/pull/0.log"
Nov 24 23:18:01 crc kubenswrapper[4915]: I1124 23:18:01.082119 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj_fbb9899f-70f3-48f3-9bcf-d9d591ba184d/util/0.log"
Nov 24 23:18:01 crc kubenswrapper[4915]: I1124 23:18:01.085678 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj_fbb9899f-70f3-48f3-9bcf-d9d591ba184d/pull/0.log"
Nov 24 23:18:01 crc kubenswrapper[4915]: I1124 23:18:01.238659 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj_fbb9899f-70f3-48f3-9bcf-d9d591ba184d/util/0.log"
Nov 24 23:18:01 crc kubenswrapper[4915]: I1124 23:18:01.242646 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj_fbb9899f-70f3-48f3-9bcf-d9d591ba184d/pull/0.log"
Nov 24 23:18:01 crc kubenswrapper[4915]: I1124 23:18:01.269253 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772ewzdgj_fbb9899f-70f3-48f3-9bcf-d9d591ba184d/extract/0.log"
Nov 24 23:18:01 crc kubenswrapper[4915]: I1124 23:18:01.398828 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm_fd00050c-03ed-4857-a586-146fc1d10b91/util/0.log"
Nov 24 23:18:01 crc kubenswrapper[4915]: I1124 23:18:01.657957 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm_fd00050c-03ed-4857-a586-146fc1d10b91/pull/0.log"
Nov 24 23:18:01 crc kubenswrapper[4915]: I1124 23:18:01.662055 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm_fd00050c-03ed-4857-a586-146fc1d10b91/util/0.log"
Nov 24 23:18:01 crc kubenswrapper[4915]: I1124 23:18:01.666365 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm_fd00050c-03ed-4857-a586-146fc1d10b91/pull/0.log"
Nov 24 23:18:01 crc kubenswrapper[4915]: I1124 23:18:01.827593 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm_fd00050c-03ed-4857-a586-146fc1d10b91/util/0.log"
Nov 24 23:18:01 crc kubenswrapper[4915]: I1124 23:18:01.845682 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm_fd00050c-03ed-4857-a586-146fc1d10b91/pull/0.log"
Nov 24 23:18:01 crc kubenswrapper[4915]: I1124 23:18:01.853490 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210dd6mm_fd00050c-03ed-4857-a586-146fc1d10b91/extract/0.log"
Nov 24 23:18:02 crc kubenswrapper[4915]: I1124 23:18:02.017302 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v_e1fb0a11-1a7c-48b6-ae22-75332ce05f0e/util/0.log"
Nov 24 23:18:02 crc kubenswrapper[4915]: I1124 23:18:02.156423 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v_e1fb0a11-1a7c-48b6-ae22-75332ce05f0e/pull/0.log"
Nov 24 23:18:02 crc kubenswrapper[4915]: I1124 23:18:02.186995 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v_e1fb0a11-1a7c-48b6-ae22-75332ce05f0e/pull/0.log"
Nov 24 23:18:02 crc kubenswrapper[4915]: I1124 23:18:02.209617 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v_e1fb0a11-1a7c-48b6-ae22-75332ce05f0e/util/0.log"
Nov 24 23:18:02 crc kubenswrapper[4915]: I1124 23:18:02.361846 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v_e1fb0a11-1a7c-48b6-ae22-75332ce05f0e/util/0.log"
Nov 24 23:18:02 crc kubenswrapper[4915]: I1124 23:18:02.381276 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v_e1fb0a11-1a7c-48b6-ae22-75332ce05f0e/pull/0.log"
Nov 24 23:18:02 crc kubenswrapper[4915]: I1124 23:18:02.415083 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fskr2v_e1fb0a11-1a7c-48b6-ae22-75332ce05f0e/extract/0.log"
Nov 24 23:18:02 crc kubenswrapper[4915]: I1124 23:18:02.530729 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8r2vt_29e5d436-67fe-411f-b770-9ab45a3aa7a1/extract-utilities/0.log"
Nov 24 23:18:02 crc kubenswrapper[4915]: I1124 23:18:02.771038 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8r2vt_29e5d436-67fe-411f-b770-9ab45a3aa7a1/extract-content/0.log"
Nov 24 23:18:02 crc kubenswrapper[4915]: I1124 23:18:02.801974 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8r2vt_29e5d436-67fe-411f-b770-9ab45a3aa7a1/extract-utilities/0.log"
Nov 24 23:18:02 crc kubenswrapper[4915]: I1124 23:18:02.814200 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8r2vt_29e5d436-67fe-411f-b770-9ab45a3aa7a1/extract-content/0.log"
Nov 24 23:18:03 crc kubenswrapper[4915]: I1124 23:18:03.021069 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8r2vt_29e5d436-67fe-411f-b770-9ab45a3aa7a1/extract-content/0.log"
Nov 24 23:18:03 crc kubenswrapper[4915]: I1124 23:18:03.026764 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8r2vt_29e5d436-67fe-411f-b770-9ab45a3aa7a1/extract-utilities/0.log"
Nov 24 23:18:03 crc kubenswrapper[4915]: I1124 23:18:03.216032 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wvj54_c95008b7-0cb9-4be3-bb4e-178ff7d25cc3/extract-utilities/0.log"
Nov 24 23:18:03 crc kubenswrapper[4915]: I1124 23:18:03.376654 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wvj54_c95008b7-0cb9-4be3-bb4e-178ff7d25cc3/extract-utilities/0.log"
Nov 24 23:18:03 crc kubenswrapper[4915]: I1124 23:18:03.445714 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wvj54_c95008b7-0cb9-4be3-bb4e-178ff7d25cc3/extract-content/0.log"
Nov 24 23:18:03 crc kubenswrapper[4915]: I1124 23:18:03.461922 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8r2vt_29e5d436-67fe-411f-b770-9ab45a3aa7a1/registry-server/0.log"
Nov 24 23:18:03 crc kubenswrapper[4915]: I1124 23:18:03.505677 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wvj54_c95008b7-0cb9-4be3-bb4e-178ff7d25cc3/extract-content/0.log"
Nov 24 23:18:03 crc kubenswrapper[4915]: I1124 23:18:03.644641 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wvj54_c95008b7-0cb9-4be3-bb4e-178ff7d25cc3/extract-content/0.log"
Nov 24 23:18:03 crc kubenswrapper[4915]: I1124 23:18:03.694461 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wvj54_c95008b7-0cb9-4be3-bb4e-178ff7d25cc3/extract-utilities/0.log"
Nov 24 23:18:03 crc kubenswrapper[4915]: I1124 23:18:03.754482 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz_67e05b3d-d45a-42b8-ad97-3e03cabf726a/util/0.log"
Nov 24 23:18:03 crc kubenswrapper[4915]: I1124 23:18:03.942885 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz_67e05b3d-d45a-42b8-ad97-3e03cabf726a/pull/0.log"
Nov 24 23:18:03 crc kubenswrapper[4915]: I1124 23:18:03.989511 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz_67e05b3d-d45a-42b8-ad97-3e03cabf726a/util/0.log"
Nov 24 23:18:04 crc kubenswrapper[4915]: I1124 23:18:04.000823 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz_67e05b3d-d45a-42b8-ad97-3e03cabf726a/pull/0.log"
Nov 24 23:18:04 crc kubenswrapper[4915]: I1124 23:18:04.163692 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz_67e05b3d-d45a-42b8-ad97-3e03cabf726a/util/0.log"
Nov 24 23:18:04 crc kubenswrapper[4915]: I1124 23:18:04.164517 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz_67e05b3d-d45a-42b8-ad97-3e03cabf726a/pull/0.log"
Nov 24 23:18:04 crc kubenswrapper[4915]: I1124 23:18:04.258390 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6gsnbz_67e05b3d-d45a-42b8-ad97-3e03cabf726a/extract/0.log"
Nov 24 23:18:04 crc kubenswrapper[4915]: I1124 23:18:04.374059 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-d489h_f3048109-51ee-4326-827b-979dc4ec0481/marketplace-operator/0.log"
Nov 24 23:18:04 crc kubenswrapper[4915]: I1124 23:18:04.483718 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hj6cr_e7b7d384-453b-42c0-89ec-b967910ab508/extract-utilities/0.log"
Nov 24 23:18:04 crc kubenswrapper[4915]: I1124 23:18:04.608834 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wvj54_c95008b7-0cb9-4be3-bb4e-178ff7d25cc3/registry-server/0.log"
Nov 24 23:18:04 crc kubenswrapper[4915]: I1124 23:18:04.658743 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hj6cr_e7b7d384-453b-42c0-89ec-b967910ab508/extract-utilities/0.log"
Nov 24 23:18:04 crc kubenswrapper[4915]: I1124 23:18:04.673623 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hj6cr_e7b7d384-453b-42c0-89ec-b967910ab508/extract-content/0.log"
Nov 24 23:18:04 crc kubenswrapper[4915]: I1124 23:18:04.695551 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hj6cr_e7b7d384-453b-42c0-89ec-b967910ab508/extract-content/0.log"
Nov 24 23:18:04 crc kubenswrapper[4915]: I1124 23:18:04.841746 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hj6cr_e7b7d384-453b-42c0-89ec-b967910ab508/extract-utilities/0.log"
Nov 24 23:18:04 crc kubenswrapper[4915]: I1124 23:18:04.843495 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hj6cr_e7b7d384-453b-42c0-89ec-b967910ab508/extract-content/0.log"
Nov 24 23:18:04 crc kubenswrapper[4915]: I1124 23:18:04.898890 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4b2jj_2a466a23-e349-49af-b513-e6e96cab17c7/extract-utilities/0.log"
Nov 24 23:18:05 crc kubenswrapper[4915]: I1124 23:18:05.087708 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-hj6cr_e7b7d384-453b-42c0-89ec-b967910ab508/registry-server/0.log"
Nov 24 23:18:05 crc kubenswrapper[4915]: I1124 23:18:05.113332 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4b2jj_2a466a23-e349-49af-b513-e6e96cab17c7/extract-content/0.log"
Nov 24 23:18:05 crc kubenswrapper[4915]: I1124 23:18:05.145791 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4b2jj_2a466a23-e349-49af-b513-e6e96cab17c7/extract-utilities/0.log"
Nov 24 23:18:05 crc kubenswrapper[4915]: I1124 23:18:05.157516 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4b2jj_2a466a23-e349-49af-b513-e6e96cab17c7/extract-content/0.log"
Nov 24 23:18:05 crc kubenswrapper[4915]: I1124 23:18:05.311143 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4b2jj_2a466a23-e349-49af-b513-e6e96cab17c7/extract-utilities/0.log"
Nov 24 23:18:05 crc kubenswrapper[4915]: I1124 23:18:05.328258 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4b2jj_2a466a23-e349-49af-b513-e6e96cab17c7/extract-content/0.log"
Nov 24 23:18:05 crc kubenswrapper[4915]: I1124 23:18:05.966481 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4b2jj_2a466a23-e349-49af-b513-e6e96cab17c7/registry-server/0.log"
Nov 24 23:18:19 crc kubenswrapper[4915]: I1124 23:18:19.562825 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-qtz6h_00e70017-d6b3-4d57-b669-068d163bc3c4/prometheus-operator/0.log"
Nov 24 23:18:19 crc kubenswrapper[4915]: I1124 23:18:19.861930 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d49669fb4-9fbpw_5720a2a7-47a3-4871-8003-736d6f5d7673/prometheus-operator-admission-webhook/0.log"
Nov 24 23:18:19 crc kubenswrapper[4915]: I1124 23:18:19.945915 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6d49669fb4-s5pjq_58a55a1e-cef9-48f3-b562-5b3a02c92cc3/prometheus-operator-admission-webhook/0.log"
Nov 24 23:18:20 crc kubenswrapper[4915]: I1124 23:18:20.065017 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-sdmdj_f68bd40b-44eb-4241-9f04-5d3deadf1951/operator/0.log"
Nov 24 23:18:20 crc kubenswrapper[4915]: I1124 23:18:20.102993 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-8dsqc_22ffe2e4-a7d4-4af8-93e6-1255daaabe16/observability-ui-dashboards/0.log"
Nov 24 23:18:20 crc kubenswrapper[4915]: I1124 23:18:20.270207 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-wmrpm_582a64c0-8cfc-43d9-beeb-37f6e2460561/perses-operator/0.log"
Nov 24 23:18:24 crc kubenswrapper[4915]: I1124 23:18:24.327907 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 23:18:24 crc kubenswrapper[4915]: I1124 23:18:24.329557 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 23:18:24 crc kubenswrapper[4915]: I1124 23:18:24.329687 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd"
Nov 24 23:18:24 crc kubenswrapper[4915]: I1124 23:18:24.330758 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 23:18:24 crc kubenswrapper[4915]: I1124 23:18:24.330947 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" gracePeriod=600
Nov 24 23:18:24 crc kubenswrapper[4915]: E1124 23:18:24.465018 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 23:18:24 crc kubenswrapper[4915]: I1124 23:18:24.566738 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" exitCode=0
Nov 24 23:18:24 crc kubenswrapper[4915]: I1124 23:18:24.566932 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf"}
Nov 24 23:18:24 crc kubenswrapper[4915]: I1124 23:18:24.567088 4915 scope.go:117] "RemoveContainer" containerID="2a89069e46a05c46d5332f059d72afceaca9bcadbe4fdf497e4e10e488952af5"
Nov 24 23:18:24 crc kubenswrapper[4915]: I1124 23:18:24.567528 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf"
Nov 24 23:18:24 crc kubenswrapper[4915]: E1124 23:18:24.567846 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 23:18:35 crc kubenswrapper[4915]: I1124 23:18:35.842997 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-65b8d94b4b-6kr4m_cf2549dc-2e6e-464a-9d5b-4631dcfe9e74/kube-rbac-proxy/0.log"
Nov 24 23:18:35 crc kubenswrapper[4915]: I1124 23:18:35.907153 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-65b8d94b4b-6kr4m_cf2549dc-2e6e-464a-9d5b-4631dcfe9e74/manager/0.log"
Nov 24 23:18:38 crc kubenswrapper[4915]: I1124 23:18:38.426980 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf"
Nov 24 23:18:38 crc kubenswrapper[4915]: E1124 23:18:38.427619 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 23:18:49 crc kubenswrapper[4915]: I1124 23:18:49.426861 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf"
Nov 24 23:18:49 crc kubenswrapper[4915]: E1124 23:18:49.427535 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 23:19:04 crc kubenswrapper[4915]: I1124 23:19:04.428206 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf"
Nov 24 23:19:04 crc kubenswrapper[4915]: E1124 23:19:04.429119 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 23:19:19 crc kubenswrapper[4915]: I1124 23:19:19.429040 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf"
Nov 24 23:19:19 crc kubenswrapper[4915]: E1124 23:19:19.430815 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 23:19:30 crc kubenswrapper[4915]: I1124 23:19:30.427752 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf"
Nov 24 23:19:30 crc kubenswrapper[4915]: E1124 23:19:30.429123 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 23:19:45 crc kubenswrapper[4915]: I1124 23:19:45.428627 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf"
Nov 24 23:19:45 crc kubenswrapper[4915]: E1124 23:19:45.429845 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992"
Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.145376 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-666mb"]
Nov 24 23:19:52 crc kubenswrapper[4915]: E1124 23:19:52.148782 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68a0b38d-95da-4ea8-b7ea-d95404f8d855" containerName="container-00"
Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.148825 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a0b38d-95da-4ea8-b7ea-d95404f8d855" containerName="container-00"
Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.149141 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="68a0b38d-95da-4ea8-b7ea-d95404f8d855" containerName="container-00"
Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.151799 4915 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.177216 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-666mb"] Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.256093 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-utilities\") pod \"redhat-operators-666mb\" (UID: \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\") " pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.256144 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-catalog-content\") pod \"redhat-operators-666mb\" (UID: \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\") " pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.256204 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6cpz\" (UniqueName: \"kubernetes.io/projected/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-kube-api-access-v6cpz\") pod \"redhat-operators-666mb\" (UID: \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\") " pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.358424 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-utilities\") pod \"redhat-operators-666mb\" (UID: \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\") " pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.358468 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-catalog-content\") pod \"redhat-operators-666mb\" (UID: \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\") " pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.358533 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6cpz\" (UniqueName: \"kubernetes.io/projected/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-kube-api-access-v6cpz\") pod \"redhat-operators-666mb\" (UID: \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\") " pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.359046 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-utilities\") pod \"redhat-operators-666mb\" (UID: \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\") " pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.359099 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-catalog-content\") pod \"redhat-operators-666mb\" (UID: \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\") " pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.384753 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6cpz\" (UniqueName: \"kubernetes.io/projected/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-kube-api-access-v6cpz\") pod \"redhat-operators-666mb\" (UID: \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\") " pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:19:52 crc kubenswrapper[4915]: I1124 23:19:52.480533 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:19:53 crc kubenswrapper[4915]: I1124 23:19:53.114969 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-666mb"] Nov 24 23:19:53 crc kubenswrapper[4915]: I1124 23:19:53.769882 4915 generic.go:334] "Generic (PLEG): container finished" podID="4b4fc230-ad53-431f-bb5f-acc1b97d14e8" containerID="feb58b695b276ad0697644486a73de30a0b70f3f75cb8b7a60a5018c5ffa3d3d" exitCode=0 Nov 24 23:19:53 crc kubenswrapper[4915]: I1124 23:19:53.770187 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-666mb" event={"ID":"4b4fc230-ad53-431f-bb5f-acc1b97d14e8","Type":"ContainerDied","Data":"feb58b695b276ad0697644486a73de30a0b70f3f75cb8b7a60a5018c5ffa3d3d"} Nov 24 23:19:53 crc kubenswrapper[4915]: I1124 23:19:53.770231 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-666mb" event={"ID":"4b4fc230-ad53-431f-bb5f-acc1b97d14e8","Type":"ContainerStarted","Data":"d20ea993696f3dc19f7f00a5c4af66ea5e718196bc0d2f76b7bb33540113c293"} Nov 24 23:19:53 crc kubenswrapper[4915]: I1124 23:19:53.774480 4915 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 24 23:19:54 crc kubenswrapper[4915]: I1124 23:19:54.782218 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-666mb" event={"ID":"4b4fc230-ad53-431f-bb5f-acc1b97d14e8","Type":"ContainerStarted","Data":"dc3a850a21de78f5f338edb92ef2362cad4eaa325c3ed5bf3891a53653f1db48"} Nov 24 23:19:56 crc kubenswrapper[4915]: I1124 23:19:56.426841 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:19:56 crc kubenswrapper[4915]: E1124 23:19:56.427685 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:19:58 crc kubenswrapper[4915]: I1124 23:19:58.834571 4915 generic.go:334] "Generic (PLEG): container finished" podID="4b4fc230-ad53-431f-bb5f-acc1b97d14e8" containerID="dc3a850a21de78f5f338edb92ef2362cad4eaa325c3ed5bf3891a53653f1db48" exitCode=0 Nov 24 23:19:58 crc kubenswrapper[4915]: I1124 23:19:58.834628 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-666mb" event={"ID":"4b4fc230-ad53-431f-bb5f-acc1b97d14e8","Type":"ContainerDied","Data":"dc3a850a21de78f5f338edb92ef2362cad4eaa325c3ed5bf3891a53653f1db48"} Nov 24 23:19:59 crc kubenswrapper[4915]: I1124 23:19:59.851899 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-666mb" event={"ID":"4b4fc230-ad53-431f-bb5f-acc1b97d14e8","Type":"ContainerStarted","Data":"2b9da6d16d0997e2ba28be07d820c8b2f6fa8fddd0b9a298713a4f0549180e32"} Nov 24 23:19:59 crc kubenswrapper[4915]: I1124 23:19:59.903443 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-666mb" podStartSLOduration=2.366959336 podStartE2EDuration="7.903427008s" podCreationTimestamp="2025-11-24 23:19:52 +0000 UTC" firstStartedPulling="2025-11-24 23:19:53.774127198 +0000 UTC m=+7212.090379381" lastFinishedPulling="2025-11-24 23:19:59.31059488 +0000 UTC m=+7217.626847053" observedRunningTime="2025-11-24 23:19:59.894019295 +0000 UTC m=+7218.210271488" watchObservedRunningTime="2025-11-24 23:19:59.903427008 +0000 UTC m=+7218.219679181" Nov 24 23:20:02 crc kubenswrapper[4915]: I1124 23:20:02.481583 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:20:02 crc kubenswrapper[4915]: I1124 23:20:02.482131 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:20:03 crc kubenswrapper[4915]: I1124 23:20:03.545595 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-666mb" podUID="4b4fc230-ad53-431f-bb5f-acc1b97d14e8" containerName="registry-server" probeResult="failure" output=< Nov 24 23:20:03 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 23:20:03 crc kubenswrapper[4915]: > Nov 24 23:20:10 crc kubenswrapper[4915]: I1124 23:20:10.427926 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:20:10 crc kubenswrapper[4915]: E1124 23:20:10.428642 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:20:12 crc kubenswrapper[4915]: I1124 23:20:12.592216 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:20:12 crc kubenswrapper[4915]: I1124 23:20:12.691288 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:20:12 crc kubenswrapper[4915]: I1124 23:20:12.862039 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-666mb"] Nov 24 23:20:14 crc kubenswrapper[4915]: I1124 23:20:14.079266 4915 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-666mb" podUID="4b4fc230-ad53-431f-bb5f-acc1b97d14e8" containerName="registry-server" containerID="cri-o://2b9da6d16d0997e2ba28be07d820c8b2f6fa8fddd0b9a298713a4f0549180e32" gracePeriod=2 Nov 24 23:20:14 crc kubenswrapper[4915]: I1124 23:20:14.700768 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:20:14 crc kubenswrapper[4915]: I1124 23:20:14.730066 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-utilities\") pod \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\" (UID: \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\") " Nov 24 23:20:14 crc kubenswrapper[4915]: I1124 23:20:14.730206 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-catalog-content\") pod \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\" (UID: \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\") " Nov 24 23:20:14 crc kubenswrapper[4915]: I1124 23:20:14.730323 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6cpz\" (UniqueName: \"kubernetes.io/projected/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-kube-api-access-v6cpz\") pod \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\" (UID: \"4b4fc230-ad53-431f-bb5f-acc1b97d14e8\") " Nov 24 23:20:14 crc kubenswrapper[4915]: I1124 23:20:14.732193 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-utilities" (OuterVolumeSpecName: "utilities") pod "4b4fc230-ad53-431f-bb5f-acc1b97d14e8" (UID: "4b4fc230-ad53-431f-bb5f-acc1b97d14e8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:20:14 crc kubenswrapper[4915]: I1124 23:20:14.741731 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-kube-api-access-v6cpz" (OuterVolumeSpecName: "kube-api-access-v6cpz") pod "4b4fc230-ad53-431f-bb5f-acc1b97d14e8" (UID: "4b4fc230-ad53-431f-bb5f-acc1b97d14e8"). InnerVolumeSpecName "kube-api-access-v6cpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:20:14 crc kubenswrapper[4915]: I1124 23:20:14.832636 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b4fc230-ad53-431f-bb5f-acc1b97d14e8" (UID: "4b4fc230-ad53-431f-bb5f-acc1b97d14e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:20:14 crc kubenswrapper[4915]: I1124 23:20:14.833370 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:20:14 crc kubenswrapper[4915]: I1124 23:20:14.833413 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6cpz\" (UniqueName: \"kubernetes.io/projected/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-kube-api-access-v6cpz\") on node \"crc\" DevicePath \"\"" Nov 24 23:20:14 crc kubenswrapper[4915]: I1124 23:20:14.833430 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b4fc230-ad53-431f-bb5f-acc1b97d14e8-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.089842 4915 generic.go:334] "Generic (PLEG): container finished" podID="4b4fc230-ad53-431f-bb5f-acc1b97d14e8" 
containerID="2b9da6d16d0997e2ba28be07d820c8b2f6fa8fddd0b9a298713a4f0549180e32" exitCode=0 Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.089890 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-666mb" event={"ID":"4b4fc230-ad53-431f-bb5f-acc1b97d14e8","Type":"ContainerDied","Data":"2b9da6d16d0997e2ba28be07d820c8b2f6fa8fddd0b9a298713a4f0549180e32"} Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.089920 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-666mb" event={"ID":"4b4fc230-ad53-431f-bb5f-acc1b97d14e8","Type":"ContainerDied","Data":"d20ea993696f3dc19f7f00a5c4af66ea5e718196bc0d2f76b7bb33540113c293"} Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.089942 4915 scope.go:117] "RemoveContainer" containerID="2b9da6d16d0997e2ba28be07d820c8b2f6fa8fddd0b9a298713a4f0549180e32" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.090084 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-666mb" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.131842 4915 scope.go:117] "RemoveContainer" containerID="dc3a850a21de78f5f338edb92ef2362cad4eaa325c3ed5bf3891a53653f1db48" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.134159 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-666mb"] Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.153376 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-666mb"] Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.155603 4915 scope.go:117] "RemoveContainer" containerID="feb58b695b276ad0697644486a73de30a0b70f3f75cb8b7a60a5018c5ffa3d3d" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.253161 4915 scope.go:117] "RemoveContainer" containerID="2b9da6d16d0997e2ba28be07d820c8b2f6fa8fddd0b9a298713a4f0549180e32" Nov 24 23:20:15 crc kubenswrapper[4915]: E1124 23:20:15.253988 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b9da6d16d0997e2ba28be07d820c8b2f6fa8fddd0b9a298713a4f0549180e32\": container with ID starting with 2b9da6d16d0997e2ba28be07d820c8b2f6fa8fddd0b9a298713a4f0549180e32 not found: ID does not exist" containerID="2b9da6d16d0997e2ba28be07d820c8b2f6fa8fddd0b9a298713a4f0549180e32" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.254059 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b9da6d16d0997e2ba28be07d820c8b2f6fa8fddd0b9a298713a4f0549180e32"} err="failed to get container status \"2b9da6d16d0997e2ba28be07d820c8b2f6fa8fddd0b9a298713a4f0549180e32\": rpc error: code = NotFound desc = could not find container \"2b9da6d16d0997e2ba28be07d820c8b2f6fa8fddd0b9a298713a4f0549180e32\": container with ID starting with 2b9da6d16d0997e2ba28be07d820c8b2f6fa8fddd0b9a298713a4f0549180e32 not found: ID does 
not exist" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.254091 4915 scope.go:117] "RemoveContainer" containerID="dc3a850a21de78f5f338edb92ef2362cad4eaa325c3ed5bf3891a53653f1db48" Nov 24 23:20:15 crc kubenswrapper[4915]: E1124 23:20:15.260159 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc3a850a21de78f5f338edb92ef2362cad4eaa325c3ed5bf3891a53653f1db48\": container with ID starting with dc3a850a21de78f5f338edb92ef2362cad4eaa325c3ed5bf3891a53653f1db48 not found: ID does not exist" containerID="dc3a850a21de78f5f338edb92ef2362cad4eaa325c3ed5bf3891a53653f1db48" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.260206 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc3a850a21de78f5f338edb92ef2362cad4eaa325c3ed5bf3891a53653f1db48"} err="failed to get container status \"dc3a850a21de78f5f338edb92ef2362cad4eaa325c3ed5bf3891a53653f1db48\": rpc error: code = NotFound desc = could not find container \"dc3a850a21de78f5f338edb92ef2362cad4eaa325c3ed5bf3891a53653f1db48\": container with ID starting with dc3a850a21de78f5f338edb92ef2362cad4eaa325c3ed5bf3891a53653f1db48 not found: ID does not exist" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.260236 4915 scope.go:117] "RemoveContainer" containerID="feb58b695b276ad0697644486a73de30a0b70f3f75cb8b7a60a5018c5ffa3d3d" Nov 24 23:20:15 crc kubenswrapper[4915]: E1124 23:20:15.260526 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb58b695b276ad0697644486a73de30a0b70f3f75cb8b7a60a5018c5ffa3d3d\": container with ID starting with feb58b695b276ad0697644486a73de30a0b70f3f75cb8b7a60a5018c5ffa3d3d not found: ID does not exist" containerID="feb58b695b276ad0697644486a73de30a0b70f3f75cb8b7a60a5018c5ffa3d3d" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.260553 4915 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb58b695b276ad0697644486a73de30a0b70f3f75cb8b7a60a5018c5ffa3d3d"} err="failed to get container status \"feb58b695b276ad0697644486a73de30a0b70f3f75cb8b7a60a5018c5ffa3d3d\": rpc error: code = NotFound desc = could not find container \"feb58b695b276ad0697644486a73de30a0b70f3f75cb8b7a60a5018c5ffa3d3d\": container with ID starting with feb58b695b276ad0697644486a73de30a0b70f3f75cb8b7a60a5018c5ffa3d3d not found: ID does not exist" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.664662 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7frkt"] Nov 24 23:20:15 crc kubenswrapper[4915]: E1124 23:20:15.670911 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b4fc230-ad53-431f-bb5f-acc1b97d14e8" containerName="registry-server" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.670962 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b4fc230-ad53-431f-bb5f-acc1b97d14e8" containerName="registry-server" Nov 24 23:20:15 crc kubenswrapper[4915]: E1124 23:20:15.671044 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b4fc230-ad53-431f-bb5f-acc1b97d14e8" containerName="extract-utilities" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.671058 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b4fc230-ad53-431f-bb5f-acc1b97d14e8" containerName="extract-utilities" Nov 24 23:20:15 crc kubenswrapper[4915]: E1124 23:20:15.671088 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b4fc230-ad53-431f-bb5f-acc1b97d14e8" containerName="extract-content" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.671099 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b4fc230-ad53-431f-bb5f-acc1b97d14e8" containerName="extract-content" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.671657 4915 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4b4fc230-ad53-431f-bb5f-acc1b97d14e8" containerName="registry-server" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.674039 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.694281 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7frkt"] Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.755472 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/886704b9-4195-4a8c-b9e7-19a97adab5db-catalog-content\") pod \"community-operators-7frkt\" (UID: \"886704b9-4195-4a8c-b9e7-19a97adab5db\") " pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.755607 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb2r8\" (UniqueName: \"kubernetes.io/projected/886704b9-4195-4a8c-b9e7-19a97adab5db-kube-api-access-mb2r8\") pod \"community-operators-7frkt\" (UID: \"886704b9-4195-4a8c-b9e7-19a97adab5db\") " pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.755717 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886704b9-4195-4a8c-b9e7-19a97adab5db-utilities\") pod \"community-operators-7frkt\" (UID: \"886704b9-4195-4a8c-b9e7-19a97adab5db\") " pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.858279 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/886704b9-4195-4a8c-b9e7-19a97adab5db-catalog-content\") pod \"community-operators-7frkt\" (UID: 
\"886704b9-4195-4a8c-b9e7-19a97adab5db\") " pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.858375 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb2r8\" (UniqueName: \"kubernetes.io/projected/886704b9-4195-4a8c-b9e7-19a97adab5db-kube-api-access-mb2r8\") pod \"community-operators-7frkt\" (UID: \"886704b9-4195-4a8c-b9e7-19a97adab5db\") " pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.858789 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/886704b9-4195-4a8c-b9e7-19a97adab5db-catalog-content\") pod \"community-operators-7frkt\" (UID: \"886704b9-4195-4a8c-b9e7-19a97adab5db\") " pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.858906 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886704b9-4195-4a8c-b9e7-19a97adab5db-utilities\") pod \"community-operators-7frkt\" (UID: \"886704b9-4195-4a8c-b9e7-19a97adab5db\") " pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.859145 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886704b9-4195-4a8c-b9e7-19a97adab5db-utilities\") pod \"community-operators-7frkt\" (UID: \"886704b9-4195-4a8c-b9e7-19a97adab5db\") " pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:15 crc kubenswrapper[4915]: I1124 23:20:15.886333 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb2r8\" (UniqueName: \"kubernetes.io/projected/886704b9-4195-4a8c-b9e7-19a97adab5db-kube-api-access-mb2r8\") pod \"community-operators-7frkt\" (UID: 
\"886704b9-4195-4a8c-b9e7-19a97adab5db\") " pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:16 crc kubenswrapper[4915]: I1124 23:20:16.002797 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:16 crc kubenswrapper[4915]: I1124 23:20:16.113462 4915 generic.go:334] "Generic (PLEG): container finished" podID="23858cf5-6d7d-4875-9586-92ebb2e329d0" containerID="25ba5d523755facfc180e1fd1440ae2ae34e80005910166ecd79fc4ed75d3481" exitCode=0 Nov 24 23:20:16 crc kubenswrapper[4915]: I1124 23:20:16.114819 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4svfd/must-gather-2qzhh" event={"ID":"23858cf5-6d7d-4875-9586-92ebb2e329d0","Type":"ContainerDied","Data":"25ba5d523755facfc180e1fd1440ae2ae34e80005910166ecd79fc4ed75d3481"} Nov 24 23:20:16 crc kubenswrapper[4915]: I1124 23:20:16.117120 4915 scope.go:117] "RemoveContainer" containerID="25ba5d523755facfc180e1fd1440ae2ae34e80005910166ecd79fc4ed75d3481" Nov 24 23:20:16 crc kubenswrapper[4915]: I1124 23:20:16.446636 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b4fc230-ad53-431f-bb5f-acc1b97d14e8" path="/var/lib/kubelet/pods/4b4fc230-ad53-431f-bb5f-acc1b97d14e8/volumes" Nov 24 23:20:16 crc kubenswrapper[4915]: I1124 23:20:16.536222 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7frkt"] Nov 24 23:20:16 crc kubenswrapper[4915]: I1124 23:20:16.613383 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4svfd_must-gather-2qzhh_23858cf5-6d7d-4875-9586-92ebb2e329d0/gather/0.log" Nov 24 23:20:17 crc kubenswrapper[4915]: I1124 23:20:17.126810 4915 generic.go:334] "Generic (PLEG): container finished" podID="886704b9-4195-4a8c-b9e7-19a97adab5db" containerID="b4340ff004b8b6cd4dc56e74f883e18572e18d2559bd89c50a6408de1e75a0c6" exitCode=0 Nov 24 23:20:17 crc kubenswrapper[4915]: I1124 
23:20:17.127242 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7frkt" event={"ID":"886704b9-4195-4a8c-b9e7-19a97adab5db","Type":"ContainerDied","Data":"b4340ff004b8b6cd4dc56e74f883e18572e18d2559bd89c50a6408de1e75a0c6"} Nov 24 23:20:17 crc kubenswrapper[4915]: I1124 23:20:17.127283 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7frkt" event={"ID":"886704b9-4195-4a8c-b9e7-19a97adab5db","Type":"ContainerStarted","Data":"d96b7b71dffef73c754fb95b1296082ed3d15575918ca7af9b8f1adf9353dc79"} Nov 24 23:20:18 crc kubenswrapper[4915]: I1124 23:20:18.145118 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7frkt" event={"ID":"886704b9-4195-4a8c-b9e7-19a97adab5db","Type":"ContainerStarted","Data":"fc10478fdb33e7cbb019328033304e59c85933b4ee60c0adf0d36cd7a6068d8f"} Nov 24 23:20:19 crc kubenswrapper[4915]: I1124 23:20:19.158407 4915 generic.go:334] "Generic (PLEG): container finished" podID="886704b9-4195-4a8c-b9e7-19a97adab5db" containerID="fc10478fdb33e7cbb019328033304e59c85933b4ee60c0adf0d36cd7a6068d8f" exitCode=0 Nov 24 23:20:19 crc kubenswrapper[4915]: I1124 23:20:19.158456 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7frkt" event={"ID":"886704b9-4195-4a8c-b9e7-19a97adab5db","Type":"ContainerDied","Data":"fc10478fdb33e7cbb019328033304e59c85933b4ee60c0adf0d36cd7a6068d8f"} Nov 24 23:20:20 crc kubenswrapper[4915]: I1124 23:20:20.185354 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7frkt" event={"ID":"886704b9-4195-4a8c-b9e7-19a97adab5db","Type":"ContainerStarted","Data":"b881ea8a5cb9b14098b720ef492f7b56cfcece74b00cd411806749ba489a8242"} Nov 24 23:20:20 crc kubenswrapper[4915]: I1124 23:20:20.212345 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-7frkt" podStartSLOduration=2.762229467 podStartE2EDuration="5.212325338s" podCreationTimestamp="2025-11-24 23:20:15 +0000 UTC" firstStartedPulling="2025-11-24 23:20:17.129708228 +0000 UTC m=+7235.445960401" lastFinishedPulling="2025-11-24 23:20:19.579804059 +0000 UTC m=+7237.896056272" observedRunningTime="2025-11-24 23:20:20.211465505 +0000 UTC m=+7238.527717718" watchObservedRunningTime="2025-11-24 23:20:20.212325338 +0000 UTC m=+7238.528577521" Nov 24 23:20:24 crc kubenswrapper[4915]: I1124 23:20:24.304639 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4svfd/must-gather-2qzhh"] Nov 24 23:20:24 crc kubenswrapper[4915]: I1124 23:20:24.305686 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-4svfd/must-gather-2qzhh" podUID="23858cf5-6d7d-4875-9586-92ebb2e329d0" containerName="copy" containerID="cri-o://e3ff8640a16813f7fdbe44f0be2b2dd43c17a0714a52ead8dc915711b42aa6a1" gracePeriod=2 Nov 24 23:20:24 crc kubenswrapper[4915]: I1124 23:20:24.321137 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4svfd/must-gather-2qzhh"] Nov 24 23:20:24 crc kubenswrapper[4915]: I1124 23:20:24.430697 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:20:24 crc kubenswrapper[4915]: E1124 23:20:24.431037 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:20:24 crc kubenswrapper[4915]: I1124 23:20:24.853929 4915 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-4svfd_must-gather-2qzhh_23858cf5-6d7d-4875-9586-92ebb2e329d0/copy/0.log" Nov 24 23:20:24 crc kubenswrapper[4915]: I1124 23:20:24.854764 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4svfd/must-gather-2qzhh" Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.049288 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwsst\" (UniqueName: \"kubernetes.io/projected/23858cf5-6d7d-4875-9586-92ebb2e329d0-kube-api-access-xwsst\") pod \"23858cf5-6d7d-4875-9586-92ebb2e329d0\" (UID: \"23858cf5-6d7d-4875-9586-92ebb2e329d0\") " Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.049887 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/23858cf5-6d7d-4875-9586-92ebb2e329d0-must-gather-output\") pod \"23858cf5-6d7d-4875-9586-92ebb2e329d0\" (UID: \"23858cf5-6d7d-4875-9586-92ebb2e329d0\") " Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.058066 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23858cf5-6d7d-4875-9586-92ebb2e329d0-kube-api-access-xwsst" (OuterVolumeSpecName: "kube-api-access-xwsst") pod "23858cf5-6d7d-4875-9586-92ebb2e329d0" (UID: "23858cf5-6d7d-4875-9586-92ebb2e329d0"). InnerVolumeSpecName "kube-api-access-xwsst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.153205 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwsst\" (UniqueName: \"kubernetes.io/projected/23858cf5-6d7d-4875-9586-92ebb2e329d0-kube-api-access-xwsst\") on node \"crc\" DevicePath \"\"" Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.231234 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23858cf5-6d7d-4875-9586-92ebb2e329d0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "23858cf5-6d7d-4875-9586-92ebb2e329d0" (UID: "23858cf5-6d7d-4875-9586-92ebb2e329d0"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.278222 4915 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/23858cf5-6d7d-4875-9586-92ebb2e329d0-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.287555 4915 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4svfd_must-gather-2qzhh_23858cf5-6d7d-4875-9586-92ebb2e329d0/copy/0.log" Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.288063 4915 generic.go:334] "Generic (PLEG): container finished" podID="23858cf5-6d7d-4875-9586-92ebb2e329d0" containerID="e3ff8640a16813f7fdbe44f0be2b2dd43c17a0714a52ead8dc915711b42aa6a1" exitCode=143 Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.288209 4915 scope.go:117] "RemoveContainer" containerID="e3ff8640a16813f7fdbe44f0be2b2dd43c17a0714a52ead8dc915711b42aa6a1" Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.289066 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4svfd/must-gather-2qzhh" Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.315918 4915 scope.go:117] "RemoveContainer" containerID="25ba5d523755facfc180e1fd1440ae2ae34e80005910166ecd79fc4ed75d3481" Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.382345 4915 scope.go:117] "RemoveContainer" containerID="e3ff8640a16813f7fdbe44f0be2b2dd43c17a0714a52ead8dc915711b42aa6a1" Nov 24 23:20:25 crc kubenswrapper[4915]: E1124 23:20:25.382802 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3ff8640a16813f7fdbe44f0be2b2dd43c17a0714a52ead8dc915711b42aa6a1\": container with ID starting with e3ff8640a16813f7fdbe44f0be2b2dd43c17a0714a52ead8dc915711b42aa6a1 not found: ID does not exist" containerID="e3ff8640a16813f7fdbe44f0be2b2dd43c17a0714a52ead8dc915711b42aa6a1" Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.382860 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3ff8640a16813f7fdbe44f0be2b2dd43c17a0714a52ead8dc915711b42aa6a1"} err="failed to get container status \"e3ff8640a16813f7fdbe44f0be2b2dd43c17a0714a52ead8dc915711b42aa6a1\": rpc error: code = NotFound desc = could not find container \"e3ff8640a16813f7fdbe44f0be2b2dd43c17a0714a52ead8dc915711b42aa6a1\": container with ID starting with e3ff8640a16813f7fdbe44f0be2b2dd43c17a0714a52ead8dc915711b42aa6a1 not found: ID does not exist" Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.382892 4915 scope.go:117] "RemoveContainer" containerID="25ba5d523755facfc180e1fd1440ae2ae34e80005910166ecd79fc4ed75d3481" Nov 24 23:20:25 crc kubenswrapper[4915]: E1124 23:20:25.383195 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25ba5d523755facfc180e1fd1440ae2ae34e80005910166ecd79fc4ed75d3481\": container with ID starting with 
25ba5d523755facfc180e1fd1440ae2ae34e80005910166ecd79fc4ed75d3481 not found: ID does not exist" containerID="25ba5d523755facfc180e1fd1440ae2ae34e80005910166ecd79fc4ed75d3481" Nov 24 23:20:25 crc kubenswrapper[4915]: I1124 23:20:25.383240 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25ba5d523755facfc180e1fd1440ae2ae34e80005910166ecd79fc4ed75d3481"} err="failed to get container status \"25ba5d523755facfc180e1fd1440ae2ae34e80005910166ecd79fc4ed75d3481\": rpc error: code = NotFound desc = could not find container \"25ba5d523755facfc180e1fd1440ae2ae34e80005910166ecd79fc4ed75d3481\": container with ID starting with 25ba5d523755facfc180e1fd1440ae2ae34e80005910166ecd79fc4ed75d3481 not found: ID does not exist" Nov 24 23:20:26 crc kubenswrapper[4915]: I1124 23:20:26.004047 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:26 crc kubenswrapper[4915]: I1124 23:20:26.004344 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:26 crc kubenswrapper[4915]: I1124 23:20:26.054954 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:26 crc kubenswrapper[4915]: I1124 23:20:26.413140 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:26 crc kubenswrapper[4915]: I1124 23:20:26.447250 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23858cf5-6d7d-4875-9586-92ebb2e329d0" path="/var/lib/kubelet/pods/23858cf5-6d7d-4875-9586-92ebb2e329d0/volumes" Nov 24 23:20:26 crc kubenswrapper[4915]: I1124 23:20:26.476709 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7frkt"] Nov 24 23:20:26 crc kubenswrapper[4915]: I1124 
23:20:26.720277 4915 scope.go:117] "RemoveContainer" containerID="00f18dc3f8d40b4e9dbd0951a27a819a601abe8f559f08dd0837ac4dda7addd6" Nov 24 23:20:28 crc kubenswrapper[4915]: I1124 23:20:28.326857 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7frkt" podUID="886704b9-4195-4a8c-b9e7-19a97adab5db" containerName="registry-server" containerID="cri-o://b881ea8a5cb9b14098b720ef492f7b56cfcece74b00cd411806749ba489a8242" gracePeriod=2 Nov 24 23:20:28 crc kubenswrapper[4915]: I1124 23:20:28.893511 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.087149 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb2r8\" (UniqueName: \"kubernetes.io/projected/886704b9-4195-4a8c-b9e7-19a97adab5db-kube-api-access-mb2r8\") pod \"886704b9-4195-4a8c-b9e7-19a97adab5db\" (UID: \"886704b9-4195-4a8c-b9e7-19a97adab5db\") " Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.087262 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886704b9-4195-4a8c-b9e7-19a97adab5db-utilities\") pod \"886704b9-4195-4a8c-b9e7-19a97adab5db\" (UID: \"886704b9-4195-4a8c-b9e7-19a97adab5db\") " Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.087517 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/886704b9-4195-4a8c-b9e7-19a97adab5db-catalog-content\") pod \"886704b9-4195-4a8c-b9e7-19a97adab5db\" (UID: \"886704b9-4195-4a8c-b9e7-19a97adab5db\") " Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.088498 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886704b9-4195-4a8c-b9e7-19a97adab5db-utilities" 
(OuterVolumeSpecName: "utilities") pod "886704b9-4195-4a8c-b9e7-19a97adab5db" (UID: "886704b9-4195-4a8c-b9e7-19a97adab5db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.093353 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886704b9-4195-4a8c-b9e7-19a97adab5db-kube-api-access-mb2r8" (OuterVolumeSpecName: "kube-api-access-mb2r8") pod "886704b9-4195-4a8c-b9e7-19a97adab5db" (UID: "886704b9-4195-4a8c-b9e7-19a97adab5db"). InnerVolumeSpecName "kube-api-access-mb2r8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.167604 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886704b9-4195-4a8c-b9e7-19a97adab5db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "886704b9-4195-4a8c-b9e7-19a97adab5db" (UID: "886704b9-4195-4a8c-b9e7-19a97adab5db"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.190627 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/886704b9-4195-4a8c-b9e7-19a97adab5db-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.190664 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb2r8\" (UniqueName: \"kubernetes.io/projected/886704b9-4195-4a8c-b9e7-19a97adab5db-kube-api-access-mb2r8\") on node \"crc\" DevicePath \"\"" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.190676 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/886704b9-4195-4a8c-b9e7-19a97adab5db-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.342872 4915 generic.go:334] "Generic (PLEG): container finished" podID="886704b9-4195-4a8c-b9e7-19a97adab5db" containerID="b881ea8a5cb9b14098b720ef492f7b56cfcece74b00cd411806749ba489a8242" exitCode=0 Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.342917 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7frkt" event={"ID":"886704b9-4195-4a8c-b9e7-19a97adab5db","Type":"ContainerDied","Data":"b881ea8a5cb9b14098b720ef492f7b56cfcece74b00cd411806749ba489a8242"} Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.342944 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7frkt" event={"ID":"886704b9-4195-4a8c-b9e7-19a97adab5db","Type":"ContainerDied","Data":"d96b7b71dffef73c754fb95b1296082ed3d15575918ca7af9b8f1adf9353dc79"} Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.342962 4915 scope.go:117] "RemoveContainer" containerID="b881ea8a5cb9b14098b720ef492f7b56cfcece74b00cd411806749ba489a8242" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 
23:20:29.343014 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7frkt" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.378693 4915 scope.go:117] "RemoveContainer" containerID="fc10478fdb33e7cbb019328033304e59c85933b4ee60c0adf0d36cd7a6068d8f" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.401976 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7frkt"] Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.417377 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7frkt"] Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.425654 4915 scope.go:117] "RemoveContainer" containerID="b4340ff004b8b6cd4dc56e74f883e18572e18d2559bd89c50a6408de1e75a0c6" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.523564 4915 scope.go:117] "RemoveContainer" containerID="b881ea8a5cb9b14098b720ef492f7b56cfcece74b00cd411806749ba489a8242" Nov 24 23:20:29 crc kubenswrapper[4915]: E1124 23:20:29.524464 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b881ea8a5cb9b14098b720ef492f7b56cfcece74b00cd411806749ba489a8242\": container with ID starting with b881ea8a5cb9b14098b720ef492f7b56cfcece74b00cd411806749ba489a8242 not found: ID does not exist" containerID="b881ea8a5cb9b14098b720ef492f7b56cfcece74b00cd411806749ba489a8242" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.524516 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b881ea8a5cb9b14098b720ef492f7b56cfcece74b00cd411806749ba489a8242"} err="failed to get container status \"b881ea8a5cb9b14098b720ef492f7b56cfcece74b00cd411806749ba489a8242\": rpc error: code = NotFound desc = could not find container \"b881ea8a5cb9b14098b720ef492f7b56cfcece74b00cd411806749ba489a8242\": container with ID starting with 
b881ea8a5cb9b14098b720ef492f7b56cfcece74b00cd411806749ba489a8242 not found: ID does not exist" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.524549 4915 scope.go:117] "RemoveContainer" containerID="fc10478fdb33e7cbb019328033304e59c85933b4ee60c0adf0d36cd7a6068d8f" Nov 24 23:20:29 crc kubenswrapper[4915]: E1124 23:20:29.524861 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc10478fdb33e7cbb019328033304e59c85933b4ee60c0adf0d36cd7a6068d8f\": container with ID starting with fc10478fdb33e7cbb019328033304e59c85933b4ee60c0adf0d36cd7a6068d8f not found: ID does not exist" containerID="fc10478fdb33e7cbb019328033304e59c85933b4ee60c0adf0d36cd7a6068d8f" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.524906 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc10478fdb33e7cbb019328033304e59c85933b4ee60c0adf0d36cd7a6068d8f"} err="failed to get container status \"fc10478fdb33e7cbb019328033304e59c85933b4ee60c0adf0d36cd7a6068d8f\": rpc error: code = NotFound desc = could not find container \"fc10478fdb33e7cbb019328033304e59c85933b4ee60c0adf0d36cd7a6068d8f\": container with ID starting with fc10478fdb33e7cbb019328033304e59c85933b4ee60c0adf0d36cd7a6068d8f not found: ID does not exist" Nov 24 23:20:29 crc kubenswrapper[4915]: I1124 23:20:29.524932 4915 scope.go:117] "RemoveContainer" containerID="b4340ff004b8b6cd4dc56e74f883e18572e18d2559bd89c50a6408de1e75a0c6" Nov 24 23:20:29 crc kubenswrapper[4915]: E1124 23:20:29.525358 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4340ff004b8b6cd4dc56e74f883e18572e18d2559bd89c50a6408de1e75a0c6\": container with ID starting with b4340ff004b8b6cd4dc56e74f883e18572e18d2559bd89c50a6408de1e75a0c6 not found: ID does not exist" containerID="b4340ff004b8b6cd4dc56e74f883e18572e18d2559bd89c50a6408de1e75a0c6" Nov 24 23:20:29 crc 
kubenswrapper[4915]: I1124 23:20:29.525396 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4340ff004b8b6cd4dc56e74f883e18572e18d2559bd89c50a6408de1e75a0c6"} err="failed to get container status \"b4340ff004b8b6cd4dc56e74f883e18572e18d2559bd89c50a6408de1e75a0c6\": rpc error: code = NotFound desc = could not find container \"b4340ff004b8b6cd4dc56e74f883e18572e18d2559bd89c50a6408de1e75a0c6\": container with ID starting with b4340ff004b8b6cd4dc56e74f883e18572e18d2559bd89c50a6408de1e75a0c6 not found: ID does not exist" Nov 24 23:20:30 crc kubenswrapper[4915]: I1124 23:20:30.449453 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="886704b9-4195-4a8c-b9e7-19a97adab5db" path="/var/lib/kubelet/pods/886704b9-4195-4a8c-b9e7-19a97adab5db/volumes" Nov 24 23:20:38 crc kubenswrapper[4915]: I1124 23:20:38.432430 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:20:38 crc kubenswrapper[4915]: E1124 23:20:38.434103 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:20:52 crc kubenswrapper[4915]: I1124 23:20:52.442525 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:20:52 crc kubenswrapper[4915]: E1124 23:20:52.443725 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:21:05 crc kubenswrapper[4915]: I1124 23:21:05.427911 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:21:05 crc kubenswrapper[4915]: E1124 23:21:05.429214 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:21:16 crc kubenswrapper[4915]: I1124 23:21:16.456319 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:21:16 crc kubenswrapper[4915]: E1124 23:21:16.457334 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:21:26 crc kubenswrapper[4915]: I1124 23:21:26.826633 4915 scope.go:117] "RemoveContainer" containerID="95e765f7f70eccda98bce04fd71d49a097936b769c11fd131d0ba3b0f839f0f5" Nov 24 23:21:31 crc kubenswrapper[4915]: I1124 23:21:31.426709 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:21:31 crc kubenswrapper[4915]: E1124 23:21:31.428486 4915 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:21:42 crc kubenswrapper[4915]: I1124 23:21:42.448440 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:21:42 crc kubenswrapper[4915]: E1124 23:21:42.449619 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:21:53 crc kubenswrapper[4915]: I1124 23:21:53.427831 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:21:53 crc kubenswrapper[4915]: E1124 23:21:53.429166 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:22:01 crc kubenswrapper[4915]: I1124 23:22:01.094656 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-dfc4k"] Nov 24 23:22:01 crc kubenswrapper[4915]: I1124 23:22:01.112432 4915 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-dfc4k"] Nov 24 23:22:02 crc kubenswrapper[4915]: I1124 23:22:02.454267 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b249c5c-fb49-4f90-9821-2c4c7b37d448" path="/var/lib/kubelet/pods/4b249c5c-fb49-4f90-9821-2c4c7b37d448/volumes" Nov 24 23:22:06 crc kubenswrapper[4915]: I1124 23:22:06.426991 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:22:06 crc kubenswrapper[4915]: E1124 23:22:06.427729 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.288030 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b89ss"] Nov 24 23:22:07 crc kubenswrapper[4915]: E1124 23:22:07.288968 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23858cf5-6d7d-4875-9586-92ebb2e329d0" containerName="gather" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.288990 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="23858cf5-6d7d-4875-9586-92ebb2e329d0" containerName="gather" Nov 24 23:22:07 crc kubenswrapper[4915]: E1124 23:22:07.289038 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886704b9-4195-4a8c-b9e7-19a97adab5db" containerName="extract-utilities" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.289048 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="886704b9-4195-4a8c-b9e7-19a97adab5db" containerName="extract-utilities" Nov 24 23:22:07 crc kubenswrapper[4915]: 
E1124 23:22:07.289081 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886704b9-4195-4a8c-b9e7-19a97adab5db" containerName="extract-content" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.289091 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="886704b9-4195-4a8c-b9e7-19a97adab5db" containerName="extract-content" Nov 24 23:22:07 crc kubenswrapper[4915]: E1124 23:22:07.289129 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886704b9-4195-4a8c-b9e7-19a97adab5db" containerName="registry-server" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.289138 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="886704b9-4195-4a8c-b9e7-19a97adab5db" containerName="registry-server" Nov 24 23:22:07 crc kubenswrapper[4915]: E1124 23:22:07.289153 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23858cf5-6d7d-4875-9586-92ebb2e329d0" containerName="copy" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.289162 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="23858cf5-6d7d-4875-9586-92ebb2e329d0" containerName="copy" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.289436 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="23858cf5-6d7d-4875-9586-92ebb2e329d0" containerName="gather" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.289469 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="23858cf5-6d7d-4875-9586-92ebb2e329d0" containerName="copy" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.289499 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="886704b9-4195-4a8c-b9e7-19a97adab5db" containerName="registry-server" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.291568 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.313911 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b89ss"] Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.452683 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a6bcc0-9a0b-4932-ac58-977526a69091-utilities\") pod \"redhat-marketplace-b89ss\" (UID: \"e6a6bcc0-9a0b-4932-ac58-977526a69091\") " pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.452893 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6znls\" (UniqueName: \"kubernetes.io/projected/e6a6bcc0-9a0b-4932-ac58-977526a69091-kube-api-access-6znls\") pod \"redhat-marketplace-b89ss\" (UID: \"e6a6bcc0-9a0b-4932-ac58-977526a69091\") " pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.452967 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a6bcc0-9a0b-4932-ac58-977526a69091-catalog-content\") pod \"redhat-marketplace-b89ss\" (UID: \"e6a6bcc0-9a0b-4932-ac58-977526a69091\") " pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.555873 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6znls\" (UniqueName: \"kubernetes.io/projected/e6a6bcc0-9a0b-4932-ac58-977526a69091-kube-api-access-6znls\") pod \"redhat-marketplace-b89ss\" (UID: \"e6a6bcc0-9a0b-4932-ac58-977526a69091\") " pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.555983 4915 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a6bcc0-9a0b-4932-ac58-977526a69091-catalog-content\") pod \"redhat-marketplace-b89ss\" (UID: \"e6a6bcc0-9a0b-4932-ac58-977526a69091\") " pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.556197 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a6bcc0-9a0b-4932-ac58-977526a69091-utilities\") pod \"redhat-marketplace-b89ss\" (UID: \"e6a6bcc0-9a0b-4932-ac58-977526a69091\") " pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.557756 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a6bcc0-9a0b-4932-ac58-977526a69091-utilities\") pod \"redhat-marketplace-b89ss\" (UID: \"e6a6bcc0-9a0b-4932-ac58-977526a69091\") " pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.557753 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a6bcc0-9a0b-4932-ac58-977526a69091-catalog-content\") pod \"redhat-marketplace-b89ss\" (UID: \"e6a6bcc0-9a0b-4932-ac58-977526a69091\") " pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.578819 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6znls\" (UniqueName: \"kubernetes.io/projected/e6a6bcc0-9a0b-4932-ac58-977526a69091-kube-api-access-6znls\") pod \"redhat-marketplace-b89ss\" (UID: \"e6a6bcc0-9a0b-4932-ac58-977526a69091\") " pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:07 crc kubenswrapper[4915]: I1124 23:22:07.616068 4915 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:08 crc kubenswrapper[4915]: I1124 23:22:08.184926 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b89ss"] Nov 24 23:22:08 crc kubenswrapper[4915]: I1124 23:22:08.885516 4915 generic.go:334] "Generic (PLEG): container finished" podID="e6a6bcc0-9a0b-4932-ac58-977526a69091" containerID="fbee262f067c66ee4b14ff4ab46e6e4701c4ee87cd74187378e4ec116da94a2c" exitCode=0 Nov 24 23:22:08 crc kubenswrapper[4915]: I1124 23:22:08.885936 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b89ss" event={"ID":"e6a6bcc0-9a0b-4932-ac58-977526a69091","Type":"ContainerDied","Data":"fbee262f067c66ee4b14ff4ab46e6e4701c4ee87cd74187378e4ec116da94a2c"} Nov 24 23:22:08 crc kubenswrapper[4915]: I1124 23:22:08.886370 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b89ss" event={"ID":"e6a6bcc0-9a0b-4932-ac58-977526a69091","Type":"ContainerStarted","Data":"e5f35aee0d4b43694765a0658890161a7cb405e1db98459a9347774f39a04c35"} Nov 24 23:22:09 crc kubenswrapper[4915]: I1124 23:22:09.924014 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b89ss" event={"ID":"e6a6bcc0-9a0b-4932-ac58-977526a69091","Type":"ContainerStarted","Data":"5e5788d33aff972a049841f46cb95ef656ffa6854018ea9dc4ec5afd127c4f47"} Nov 24 23:22:10 crc kubenswrapper[4915]: I1124 23:22:10.940549 4915 generic.go:334] "Generic (PLEG): container finished" podID="e6a6bcc0-9a0b-4932-ac58-977526a69091" containerID="5e5788d33aff972a049841f46cb95ef656ffa6854018ea9dc4ec5afd127c4f47" exitCode=0 Nov 24 23:22:10 crc kubenswrapper[4915]: I1124 23:22:10.940632 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b89ss" 
event={"ID":"e6a6bcc0-9a0b-4932-ac58-977526a69091","Type":"ContainerDied","Data":"5e5788d33aff972a049841f46cb95ef656ffa6854018ea9dc4ec5afd127c4f47"} Nov 24 23:22:11 crc kubenswrapper[4915]: I1124 23:22:11.957901 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b89ss" event={"ID":"e6a6bcc0-9a0b-4932-ac58-977526a69091","Type":"ContainerStarted","Data":"e3431e3e41157222cdf2742466cbf88f2c0596e141737648f7170e8e8602668c"} Nov 24 23:22:11 crc kubenswrapper[4915]: I1124 23:22:11.995207 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b89ss" podStartSLOduration=2.326771482 podStartE2EDuration="4.995178579s" podCreationTimestamp="2025-11-24 23:22:07 +0000 UTC" firstStartedPulling="2025-11-24 23:22:08.888574233 +0000 UTC m=+7347.204826416" lastFinishedPulling="2025-11-24 23:22:11.55698131 +0000 UTC m=+7349.873233513" observedRunningTime="2025-11-24 23:22:11.985352664 +0000 UTC m=+7350.301604837" watchObservedRunningTime="2025-11-24 23:22:11.995178579 +0000 UTC m=+7350.311430792" Nov 24 23:22:14 crc kubenswrapper[4915]: I1124 23:22:14.058331 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-bnv6m"] Nov 24 23:22:14 crc kubenswrapper[4915]: I1124 23:22:14.068497 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-bnv6m"] Nov 24 23:22:14 crc kubenswrapper[4915]: I1124 23:22:14.471333 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f948bc6-46ff-4a75-98c5-beafdbd54bcc" path="/var/lib/kubelet/pods/3f948bc6-46ff-4a75-98c5-beafdbd54bcc/volumes" Nov 24 23:22:17 crc kubenswrapper[4915]: I1124 23:22:17.618194 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:17 crc kubenswrapper[4915]: I1124 23:22:17.619010 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:17 crc kubenswrapper[4915]: I1124 23:22:17.699290 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:18 crc kubenswrapper[4915]: I1124 23:22:18.083289 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:18 crc kubenswrapper[4915]: I1124 23:22:18.150043 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b89ss"] Nov 24 23:22:18 crc kubenswrapper[4915]: I1124 23:22:18.427990 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:22:18 crc kubenswrapper[4915]: E1124 23:22:18.428351 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:22:20 crc kubenswrapper[4915]: I1124 23:22:20.043814 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b89ss" podUID="e6a6bcc0-9a0b-4932-ac58-977526a69091" containerName="registry-server" containerID="cri-o://e3431e3e41157222cdf2742466cbf88f2c0596e141737648f7170e8e8602668c" gracePeriod=2 Nov 24 23:22:20 crc kubenswrapper[4915]: I1124 23:22:20.674377 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:20 crc kubenswrapper[4915]: I1124 23:22:20.821544 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a6bcc0-9a0b-4932-ac58-977526a69091-catalog-content\") pod \"e6a6bcc0-9a0b-4932-ac58-977526a69091\" (UID: \"e6a6bcc0-9a0b-4932-ac58-977526a69091\") " Nov 24 23:22:20 crc kubenswrapper[4915]: I1124 23:22:20.821986 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a6bcc0-9a0b-4932-ac58-977526a69091-utilities\") pod \"e6a6bcc0-9a0b-4932-ac58-977526a69091\" (UID: \"e6a6bcc0-9a0b-4932-ac58-977526a69091\") " Nov 24 23:22:20 crc kubenswrapper[4915]: I1124 23:22:20.822128 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6znls\" (UniqueName: \"kubernetes.io/projected/e6a6bcc0-9a0b-4932-ac58-977526a69091-kube-api-access-6znls\") pod \"e6a6bcc0-9a0b-4932-ac58-977526a69091\" (UID: \"e6a6bcc0-9a0b-4932-ac58-977526a69091\") " Nov 24 23:22:20 crc kubenswrapper[4915]: I1124 23:22:20.824434 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6a6bcc0-9a0b-4932-ac58-977526a69091-utilities" (OuterVolumeSpecName: "utilities") pod "e6a6bcc0-9a0b-4932-ac58-977526a69091" (UID: "e6a6bcc0-9a0b-4932-ac58-977526a69091"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:22:20 crc kubenswrapper[4915]: I1124 23:22:20.830631 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6a6bcc0-9a0b-4932-ac58-977526a69091-kube-api-access-6znls" (OuterVolumeSpecName: "kube-api-access-6znls") pod "e6a6bcc0-9a0b-4932-ac58-977526a69091" (UID: "e6a6bcc0-9a0b-4932-ac58-977526a69091"). InnerVolumeSpecName "kube-api-access-6znls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 24 23:22:20 crc kubenswrapper[4915]: I1124 23:22:20.838448 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6a6bcc0-9a0b-4932-ac58-977526a69091-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6a6bcc0-9a0b-4932-ac58-977526a69091" (UID: "e6a6bcc0-9a0b-4932-ac58-977526a69091"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:22:20 crc kubenswrapper[4915]: I1124 23:22:20.925113 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6a6bcc0-9a0b-4932-ac58-977526a69091-utilities\") on node \"crc\" DevicePath \"\"" Nov 24 23:22:20 crc kubenswrapper[4915]: I1124 23:22:20.925144 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6znls\" (UniqueName: \"kubernetes.io/projected/e6a6bcc0-9a0b-4932-ac58-977526a69091-kube-api-access-6znls\") on node \"crc\" DevicePath \"\"" Nov 24 23:22:20 crc kubenswrapper[4915]: I1124 23:22:20.925155 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6a6bcc0-9a0b-4932-ac58-977526a69091-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.071229 4915 generic.go:334] "Generic (PLEG): container finished" podID="e6a6bcc0-9a0b-4932-ac58-977526a69091" containerID="e3431e3e41157222cdf2742466cbf88f2c0596e141737648f7170e8e8602668c" exitCode=0 Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.071330 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b89ss" Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.071375 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b89ss" event={"ID":"e6a6bcc0-9a0b-4932-ac58-977526a69091","Type":"ContainerDied","Data":"e3431e3e41157222cdf2742466cbf88f2c0596e141737648f7170e8e8602668c"} Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.072110 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b89ss" event={"ID":"e6a6bcc0-9a0b-4932-ac58-977526a69091","Type":"ContainerDied","Data":"e5f35aee0d4b43694765a0658890161a7cb405e1db98459a9347774f39a04c35"} Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.072137 4915 scope.go:117] "RemoveContainer" containerID="e3431e3e41157222cdf2742466cbf88f2c0596e141737648f7170e8e8602668c" Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.114574 4915 scope.go:117] "RemoveContainer" containerID="5e5788d33aff972a049841f46cb95ef656ffa6854018ea9dc4ec5afd127c4f47" Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.139966 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b89ss"] Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.157710 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b89ss"] Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.159491 4915 scope.go:117] "RemoveContainer" containerID="fbee262f067c66ee4b14ff4ab46e6e4701c4ee87cd74187378e4ec116da94a2c" Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.247137 4915 scope.go:117] "RemoveContainer" containerID="e3431e3e41157222cdf2742466cbf88f2c0596e141737648f7170e8e8602668c" Nov 24 23:22:21 crc kubenswrapper[4915]: E1124 23:22:21.250160 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e3431e3e41157222cdf2742466cbf88f2c0596e141737648f7170e8e8602668c\": container with ID starting with e3431e3e41157222cdf2742466cbf88f2c0596e141737648f7170e8e8602668c not found: ID does not exist" containerID="e3431e3e41157222cdf2742466cbf88f2c0596e141737648f7170e8e8602668c" Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.250227 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3431e3e41157222cdf2742466cbf88f2c0596e141737648f7170e8e8602668c"} err="failed to get container status \"e3431e3e41157222cdf2742466cbf88f2c0596e141737648f7170e8e8602668c\": rpc error: code = NotFound desc = could not find container \"e3431e3e41157222cdf2742466cbf88f2c0596e141737648f7170e8e8602668c\": container with ID starting with e3431e3e41157222cdf2742466cbf88f2c0596e141737648f7170e8e8602668c not found: ID does not exist" Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.250272 4915 scope.go:117] "RemoveContainer" containerID="5e5788d33aff972a049841f46cb95ef656ffa6854018ea9dc4ec5afd127c4f47" Nov 24 23:22:21 crc kubenswrapper[4915]: E1124 23:22:21.250896 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e5788d33aff972a049841f46cb95ef656ffa6854018ea9dc4ec5afd127c4f47\": container with ID starting with 5e5788d33aff972a049841f46cb95ef656ffa6854018ea9dc4ec5afd127c4f47 not found: ID does not exist" containerID="5e5788d33aff972a049841f46cb95ef656ffa6854018ea9dc4ec5afd127c4f47" Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.250961 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e5788d33aff972a049841f46cb95ef656ffa6854018ea9dc4ec5afd127c4f47"} err="failed to get container status \"5e5788d33aff972a049841f46cb95ef656ffa6854018ea9dc4ec5afd127c4f47\": rpc error: code = NotFound desc = could not find container \"5e5788d33aff972a049841f46cb95ef656ffa6854018ea9dc4ec5afd127c4f47\": container with ID 
starting with 5e5788d33aff972a049841f46cb95ef656ffa6854018ea9dc4ec5afd127c4f47 not found: ID does not exist" Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.250995 4915 scope.go:117] "RemoveContainer" containerID="fbee262f067c66ee4b14ff4ab46e6e4701c4ee87cd74187378e4ec116da94a2c" Nov 24 23:22:21 crc kubenswrapper[4915]: E1124 23:22:21.251529 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbee262f067c66ee4b14ff4ab46e6e4701c4ee87cd74187378e4ec116da94a2c\": container with ID starting with fbee262f067c66ee4b14ff4ab46e6e4701c4ee87cd74187378e4ec116da94a2c not found: ID does not exist" containerID="fbee262f067c66ee4b14ff4ab46e6e4701c4ee87cd74187378e4ec116da94a2c" Nov 24 23:22:21 crc kubenswrapper[4915]: I1124 23:22:21.251579 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbee262f067c66ee4b14ff4ab46e6e4701c4ee87cd74187378e4ec116da94a2c"} err="failed to get container status \"fbee262f067c66ee4b14ff4ab46e6e4701c4ee87cd74187378e4ec116da94a2c\": rpc error: code = NotFound desc = could not find container \"fbee262f067c66ee4b14ff4ab46e6e4701c4ee87cd74187378e4ec116da94a2c\": container with ID starting with fbee262f067c66ee4b14ff4ab46e6e4701c4ee87cd74187378e4ec116da94a2c not found: ID does not exist" Nov 24 23:22:22 crc kubenswrapper[4915]: I1124 23:22:22.458361 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6a6bcc0-9a0b-4932-ac58-977526a69091" path="/var/lib/kubelet/pods/e6a6bcc0-9a0b-4932-ac58-977526a69091/volumes" Nov 24 23:22:26 crc kubenswrapper[4915]: I1124 23:22:26.957405 4915 scope.go:117] "RemoveContainer" containerID="b4034315af3f08e682f91b225dd9abfd4478dd5646f38ef894b68876899a8fd3" Nov 24 23:22:26 crc kubenswrapper[4915]: I1124 23:22:26.994279 4915 scope.go:117] "RemoveContainer" containerID="3e2bc40d8141a053268a19df9e449dfc313ffe1e064df31601ea3f0e4980a19b" Nov 24 23:22:29 crc kubenswrapper[4915]: 
I1124 23:22:29.427901 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:22:29 crc kubenswrapper[4915]: E1124 23:22:29.429148 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:22:41 crc kubenswrapper[4915]: I1124 23:22:41.431857 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:22:41 crc kubenswrapper[4915]: E1124 23:22:41.433663 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.514495 4915 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ss54f"] Nov 24 23:22:52 crc kubenswrapper[4915]: E1124 23:22:52.516055 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a6bcc0-9a0b-4932-ac58-977526a69091" containerName="extract-utilities" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.516085 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a6bcc0-9a0b-4932-ac58-977526a69091" containerName="extract-utilities" Nov 24 23:22:52 crc kubenswrapper[4915]: E1124 23:22:52.516112 4915 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e6a6bcc0-9a0b-4932-ac58-977526a69091" containerName="extract-content" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.516125 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a6bcc0-9a0b-4932-ac58-977526a69091" containerName="extract-content" Nov 24 23:22:52 crc kubenswrapper[4915]: E1124 23:22:52.516189 4915 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6a6bcc0-9a0b-4932-ac58-977526a69091" containerName="registry-server" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.516202 4915 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6a6bcc0-9a0b-4932-ac58-977526a69091" containerName="registry-server" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.516617 4915 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6a6bcc0-9a0b-4932-ac58-977526a69091" containerName="registry-server" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.519702 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.534201 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ss54f"] Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.592671 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhr2q\" (UniqueName: \"kubernetes.io/projected/322afde3-df1b-4f18-a6cb-b52e58df6538-kube-api-access-jhr2q\") pod \"certified-operators-ss54f\" (UID: \"322afde3-df1b-4f18-a6cb-b52e58df6538\") " pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.592731 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/322afde3-df1b-4f18-a6cb-b52e58df6538-catalog-content\") pod \"certified-operators-ss54f\" (UID: 
\"322afde3-df1b-4f18-a6cb-b52e58df6538\") " pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.592977 4915 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/322afde3-df1b-4f18-a6cb-b52e58df6538-utilities\") pod \"certified-operators-ss54f\" (UID: \"322afde3-df1b-4f18-a6cb-b52e58df6538\") " pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.696082 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhr2q\" (UniqueName: \"kubernetes.io/projected/322afde3-df1b-4f18-a6cb-b52e58df6538-kube-api-access-jhr2q\") pod \"certified-operators-ss54f\" (UID: \"322afde3-df1b-4f18-a6cb-b52e58df6538\") " pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.696162 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/322afde3-df1b-4f18-a6cb-b52e58df6538-catalog-content\") pod \"certified-operators-ss54f\" (UID: \"322afde3-df1b-4f18-a6cb-b52e58df6538\") " pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.696286 4915 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/322afde3-df1b-4f18-a6cb-b52e58df6538-utilities\") pod \"certified-operators-ss54f\" (UID: \"322afde3-df1b-4f18-a6cb-b52e58df6538\") " pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.696690 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/322afde3-df1b-4f18-a6cb-b52e58df6538-catalog-content\") pod \"certified-operators-ss54f\" (UID: 
\"322afde3-df1b-4f18-a6cb-b52e58df6538\") " pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.696997 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/322afde3-df1b-4f18-a6cb-b52e58df6538-utilities\") pod \"certified-operators-ss54f\" (UID: \"322afde3-df1b-4f18-a6cb-b52e58df6538\") " pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.720627 4915 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhr2q\" (UniqueName: \"kubernetes.io/projected/322afde3-df1b-4f18-a6cb-b52e58df6538-kube-api-access-jhr2q\") pod \"certified-operators-ss54f\" (UID: \"322afde3-df1b-4f18-a6cb-b52e58df6538\") " pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:22:52 crc kubenswrapper[4915]: I1124 23:22:52.847420 4915 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:22:53 crc kubenswrapper[4915]: I1124 23:22:53.402826 4915 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ss54f"] Nov 24 23:22:53 crc kubenswrapper[4915]: I1124 23:22:53.599745 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ss54f" event={"ID":"322afde3-df1b-4f18-a6cb-b52e58df6538","Type":"ContainerStarted","Data":"80e40b370a49ac84d4dd2f5a4f8679bd98c8c05f529e7555967e68ca51100553"} Nov 24 23:22:54 crc kubenswrapper[4915]: I1124 23:22:54.431643 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:22:54 crc kubenswrapper[4915]: E1124 23:22:54.432549 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:22:54 crc kubenswrapper[4915]: I1124 23:22:54.610158 4915 generic.go:334] "Generic (PLEG): container finished" podID="322afde3-df1b-4f18-a6cb-b52e58df6538" containerID="409103b65990455971d3b31db2ab1f44ca1db6e00f3fe616194983be09f938cd" exitCode=0 Nov 24 23:22:54 crc kubenswrapper[4915]: I1124 23:22:54.610212 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ss54f" event={"ID":"322afde3-df1b-4f18-a6cb-b52e58df6538","Type":"ContainerDied","Data":"409103b65990455971d3b31db2ab1f44ca1db6e00f3fe616194983be09f938cd"} Nov 24 23:22:55 crc kubenswrapper[4915]: I1124 23:22:55.630131 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ss54f" event={"ID":"322afde3-df1b-4f18-a6cb-b52e58df6538","Type":"ContainerStarted","Data":"3733f461128e02cbd670ba0eacd4d2bfa24ca8337d26beb5136113e12054c994"} Nov 24 23:22:57 crc kubenswrapper[4915]: I1124 23:22:57.660084 4915 generic.go:334] "Generic (PLEG): container finished" podID="322afde3-df1b-4f18-a6cb-b52e58df6538" containerID="3733f461128e02cbd670ba0eacd4d2bfa24ca8337d26beb5136113e12054c994" exitCode=0 Nov 24 23:22:57 crc kubenswrapper[4915]: I1124 23:22:57.660370 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ss54f" event={"ID":"322afde3-df1b-4f18-a6cb-b52e58df6538","Type":"ContainerDied","Data":"3733f461128e02cbd670ba0eacd4d2bfa24ca8337d26beb5136113e12054c994"} Nov 24 23:22:58 crc kubenswrapper[4915]: I1124 23:22:58.679768 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ss54f" 
event={"ID":"322afde3-df1b-4f18-a6cb-b52e58df6538","Type":"ContainerStarted","Data":"c14d65d2069f332b942e9961f0b751858ff80c541c0c4e4f371ed780c7bc38fd"} Nov 24 23:22:58 crc kubenswrapper[4915]: I1124 23:22:58.719947 4915 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ss54f" podStartSLOduration=3.230787443 podStartE2EDuration="6.719928467s" podCreationTimestamp="2025-11-24 23:22:52 +0000 UTC" firstStartedPulling="2025-11-24 23:22:54.612523798 +0000 UTC m=+7392.928775971" lastFinishedPulling="2025-11-24 23:22:58.101664772 +0000 UTC m=+7396.417916995" observedRunningTime="2025-11-24 23:22:58.697628185 +0000 UTC m=+7397.013880398" watchObservedRunningTime="2025-11-24 23:22:58.719928467 +0000 UTC m=+7397.036180640" Nov 24 23:23:02 crc kubenswrapper[4915]: I1124 23:23:02.847690 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:23:02 crc kubenswrapper[4915]: I1124 23:23:02.848096 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:23:03 crc kubenswrapper[4915]: I1124 23:23:03.940227 4915 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ss54f" podUID="322afde3-df1b-4f18-a6cb-b52e58df6538" containerName="registry-server" probeResult="failure" output=< Nov 24 23:23:03 crc kubenswrapper[4915]: timeout: failed to connect service ":50051" within 1s Nov 24 23:23:03 crc kubenswrapper[4915]: > Nov 24 23:23:09 crc kubenswrapper[4915]: I1124 23:23:09.426502 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf" Nov 24 23:23:09 crc kubenswrapper[4915]: E1124 23:23:09.427314 4915 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-lxwjd_openshift-machine-config-operator(3a95ccb9-af8d-493c-b3c5-4fcb2e28b992)\"" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" Nov 24 23:23:12 crc kubenswrapper[4915]: I1124 23:23:12.946998 4915 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:23:13 crc kubenswrapper[4915]: I1124 23:23:13.021257 4915 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:23:13 crc kubenswrapper[4915]: I1124 23:23:13.200136 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ss54f"] Nov 24 23:23:14 crc kubenswrapper[4915]: I1124 23:23:14.884671 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ss54f" podUID="322afde3-df1b-4f18-a6cb-b52e58df6538" containerName="registry-server" containerID="cri-o://c14d65d2069f332b942e9961f0b751858ff80c541c0c4e4f371ed780c7bc38fd" gracePeriod=2 Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.416625 4915 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ss54f" Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.516577 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/322afde3-df1b-4f18-a6cb-b52e58df6538-catalog-content\") pod \"322afde3-df1b-4f18-a6cb-b52e58df6538\" (UID: \"322afde3-df1b-4f18-a6cb-b52e58df6538\") " Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.516617 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhr2q\" (UniqueName: \"kubernetes.io/projected/322afde3-df1b-4f18-a6cb-b52e58df6538-kube-api-access-jhr2q\") pod \"322afde3-df1b-4f18-a6cb-b52e58df6538\" (UID: \"322afde3-df1b-4f18-a6cb-b52e58df6538\") " Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.516642 4915 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/322afde3-df1b-4f18-a6cb-b52e58df6538-utilities\") pod \"322afde3-df1b-4f18-a6cb-b52e58df6538\" (UID: \"322afde3-df1b-4f18-a6cb-b52e58df6538\") " Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.517703 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/322afde3-df1b-4f18-a6cb-b52e58df6538-utilities" (OuterVolumeSpecName: "utilities") pod "322afde3-df1b-4f18-a6cb-b52e58df6538" (UID: "322afde3-df1b-4f18-a6cb-b52e58df6538"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.525116 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/322afde3-df1b-4f18-a6cb-b52e58df6538-kube-api-access-jhr2q" (OuterVolumeSpecName: "kube-api-access-jhr2q") pod "322afde3-df1b-4f18-a6cb-b52e58df6538" (UID: "322afde3-df1b-4f18-a6cb-b52e58df6538"). InnerVolumeSpecName "kube-api-access-jhr2q". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.566301 4915 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/322afde3-df1b-4f18-a6cb-b52e58df6538-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "322afde3-df1b-4f18-a6cb-b52e58df6538" (UID: "322afde3-df1b-4f18-a6cb-b52e58df6538"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.619445 4915 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/322afde3-df1b-4f18-a6cb-b52e58df6538-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.619487 4915 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhr2q\" (UniqueName: \"kubernetes.io/projected/322afde3-df1b-4f18-a6cb-b52e58df6538-kube-api-access-jhr2q\") on node \"crc\" DevicePath \"\""
Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.619505 4915 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/322afde3-df1b-4f18-a6cb-b52e58df6538-utilities\") on node \"crc\" DevicePath \"\""
Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.899251 4915 generic.go:334] "Generic (PLEG): container finished" podID="322afde3-df1b-4f18-a6cb-b52e58df6538" containerID="c14d65d2069f332b942e9961f0b751858ff80c541c0c4e4f371ed780c7bc38fd" exitCode=0
Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.899326 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ss54f" event={"ID":"322afde3-df1b-4f18-a6cb-b52e58df6538","Type":"ContainerDied","Data":"c14d65d2069f332b942e9961f0b751858ff80c541c0c4e4f371ed780c7bc38fd"}
Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.900731 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ss54f" event={"ID":"322afde3-df1b-4f18-a6cb-b52e58df6538","Type":"ContainerDied","Data":"80e40b370a49ac84d4dd2f5a4f8679bd98c8c05f529e7555967e68ca51100553"}
Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.899337 4915 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ss54f"
Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.900804 4915 scope.go:117] "RemoveContainer" containerID="c14d65d2069f332b942e9961f0b751858ff80c541c0c4e4f371ed780c7bc38fd"
Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.937991 4915 scope.go:117] "RemoveContainer" containerID="3733f461128e02cbd670ba0eacd4d2bfa24ca8337d26beb5136113e12054c994"
Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.949515 4915 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ss54f"]
Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.960482 4915 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ss54f"]
Nov 24 23:23:15 crc kubenswrapper[4915]: I1124 23:23:15.967068 4915 scope.go:117] "RemoveContainer" containerID="409103b65990455971d3b31db2ab1f44ca1db6e00f3fe616194983be09f938cd"
Nov 24 23:23:16 crc kubenswrapper[4915]: I1124 23:23:16.020978 4915 scope.go:117] "RemoveContainer" containerID="c14d65d2069f332b942e9961f0b751858ff80c541c0c4e4f371ed780c7bc38fd"
Nov 24 23:23:16 crc kubenswrapper[4915]: E1124 23:23:16.021711 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c14d65d2069f332b942e9961f0b751858ff80c541c0c4e4f371ed780c7bc38fd\": container with ID starting with c14d65d2069f332b942e9961f0b751858ff80c541c0c4e4f371ed780c7bc38fd not found: ID does not exist" containerID="c14d65d2069f332b942e9961f0b751858ff80c541c0c4e4f371ed780c7bc38fd"
Nov 24 23:23:16 crc kubenswrapper[4915]: I1124 23:23:16.021902 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c14d65d2069f332b942e9961f0b751858ff80c541c0c4e4f371ed780c7bc38fd"} err="failed to get container status \"c14d65d2069f332b942e9961f0b751858ff80c541c0c4e4f371ed780c7bc38fd\": rpc error: code = NotFound desc = could not find container \"c14d65d2069f332b942e9961f0b751858ff80c541c0c4e4f371ed780c7bc38fd\": container with ID starting with c14d65d2069f332b942e9961f0b751858ff80c541c0c4e4f371ed780c7bc38fd not found: ID does not exist"
Nov 24 23:23:16 crc kubenswrapper[4915]: I1124 23:23:16.022032 4915 scope.go:117] "RemoveContainer" containerID="3733f461128e02cbd670ba0eacd4d2bfa24ca8337d26beb5136113e12054c994"
Nov 24 23:23:16 crc kubenswrapper[4915]: E1124 23:23:16.022540 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3733f461128e02cbd670ba0eacd4d2bfa24ca8337d26beb5136113e12054c994\": container with ID starting with 3733f461128e02cbd670ba0eacd4d2bfa24ca8337d26beb5136113e12054c994 not found: ID does not exist" containerID="3733f461128e02cbd670ba0eacd4d2bfa24ca8337d26beb5136113e12054c994"
Nov 24 23:23:16 crc kubenswrapper[4915]: I1124 23:23:16.022580 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3733f461128e02cbd670ba0eacd4d2bfa24ca8337d26beb5136113e12054c994"} err="failed to get container status \"3733f461128e02cbd670ba0eacd4d2bfa24ca8337d26beb5136113e12054c994\": rpc error: code = NotFound desc = could not find container \"3733f461128e02cbd670ba0eacd4d2bfa24ca8337d26beb5136113e12054c994\": container with ID starting with 3733f461128e02cbd670ba0eacd4d2bfa24ca8337d26beb5136113e12054c994 not found: ID does not exist"
Nov 24 23:23:16 crc kubenswrapper[4915]: I1124 23:23:16.022607 4915 scope.go:117] "RemoveContainer" containerID="409103b65990455971d3b31db2ab1f44ca1db6e00f3fe616194983be09f938cd"
Nov 24 23:23:16 crc kubenswrapper[4915]: E1124 23:23:16.023053 4915 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"409103b65990455971d3b31db2ab1f44ca1db6e00f3fe616194983be09f938cd\": container with ID starting with 409103b65990455971d3b31db2ab1f44ca1db6e00f3fe616194983be09f938cd not found: ID does not exist" containerID="409103b65990455971d3b31db2ab1f44ca1db6e00f3fe616194983be09f938cd"
Nov 24 23:23:16 crc kubenswrapper[4915]: I1124 23:23:16.023097 4915 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"409103b65990455971d3b31db2ab1f44ca1db6e00f3fe616194983be09f938cd"} err="failed to get container status \"409103b65990455971d3b31db2ab1f44ca1db6e00f3fe616194983be09f938cd\": rpc error: code = NotFound desc = could not find container \"409103b65990455971d3b31db2ab1f44ca1db6e00f3fe616194983be09f938cd\": container with ID starting with 409103b65990455971d3b31db2ab1f44ca1db6e00f3fe616194983be09f938cd not found: ID does not exist"
Nov 24 23:23:16 crc kubenswrapper[4915]: I1124 23:23:16.445893 4915 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="322afde3-df1b-4f18-a6cb-b52e58df6538" path="/var/lib/kubelet/pods/322afde3-df1b-4f18-a6cb-b52e58df6538/volumes"
Nov 24 23:23:24 crc kubenswrapper[4915]: I1124 23:23:24.427770 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf"
Nov 24 23:23:25 crc kubenswrapper[4915]: I1124 23:23:25.029737 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"24570518b278a0e94463483eef602c4137bd17c7e7db82ddc76140bcd4dfed20"}
Nov 24 23:25:24 crc kubenswrapper[4915]: I1124 23:25:24.326949 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 23:25:24 crc kubenswrapper[4915]: I1124 23:25:24.327895 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 23:25:54 crc kubenswrapper[4915]: I1124 23:25:54.327486 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 23:25:54 crc kubenswrapper[4915]: I1124 23:25:54.328126 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 23:26:24 crc kubenswrapper[4915]: I1124 23:26:24.327476 4915 patch_prober.go:28] interesting pod/machine-config-daemon-lxwjd container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 24 23:26:24 crc kubenswrapper[4915]: I1124 23:26:24.328203 4915 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 24 23:26:24 crc kubenswrapper[4915]: I1124 23:26:24.328276 4915 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd"
Nov 24 23:26:24 crc kubenswrapper[4915]: I1124 23:26:24.329734 4915 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"24570518b278a0e94463483eef602c4137bd17c7e7db82ddc76140bcd4dfed20"} pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 24 23:26:24 crc kubenswrapper[4915]: I1124 23:26:24.329852 4915 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" podUID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerName="machine-config-daemon" containerID="cri-o://24570518b278a0e94463483eef602c4137bd17c7e7db82ddc76140bcd4dfed20" gracePeriod=600
Nov 24 23:26:24 crc kubenswrapper[4915]: I1124 23:26:24.714567 4915 generic.go:334] "Generic (PLEG): container finished" podID="3a95ccb9-af8d-493c-b3c5-4fcb2e28b992" containerID="24570518b278a0e94463483eef602c4137bd17c7e7db82ddc76140bcd4dfed20" exitCode=0
Nov 24 23:26:24 crc kubenswrapper[4915]: I1124 23:26:24.714629 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerDied","Data":"24570518b278a0e94463483eef602c4137bd17c7e7db82ddc76140bcd4dfed20"}
Nov 24 23:26:24 crc kubenswrapper[4915]: I1124 23:26:24.714965 4915 scope.go:117] "RemoveContainer" containerID="4c4992e0c66e1a80e8d0f8351bb8d73606b94eabfb2432575a0d7a556eb597cf"
Nov 24 23:26:25 crc kubenswrapper[4915]: I1124 23:26:25.727005 4915 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxwjd" event={"ID":"3a95ccb9-af8d-493c-b3c5-4fcb2e28b992","Type":"ContainerStarted","Data":"1734b5e50d6fa59de429dd04cf8b25770a3f789732cb53d02c3782dcb2d43fdc"}